[Gluster-users] geo-replication status faulty

Chris Ferraro ChrisFerraro at fico.com
Fri Jun 27 23:02:18 UTC 2014


OK, got it working, but questioning how I got here.

# find /var/lib/glusterd/geo-replication/ -type f -name "gsyncd*"
/var/lib/glusterd/geo-replication/gluster_vol0_node003_gluster_vol1/gsyncd.conf
/var/lib/glusterd/geo-replication/gsyncd_template.conf

Both gsyncd.conf files contained references to /nonexistent/gsyncd.  I modified …/gluster_vol0_node003_gluster_vol1/gsyncd.conf to point to /usr/libexec/glusterfs/gsyncd, restarted the geo-replication session, and everything is now working.
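
For the record, the change amounted to roughly the following (a sketch of what I ran; the sed pattern just swaps the placeholder path for the actual gsyncd location on my nodes):

# gluster volume geo-replication gluster_vol0 node003::gluster_vol1 stop
# sed -i 's|/nonexistent/gsyncd|/usr/libexec/glusterfs/gsyncd|' /var/lib/glusterd/geo-replication/gluster_vol0_node003_gluster_vol1/gsyncd.conf
# gluster volume geo-replication gluster_vol0 node003::gluster_vol1 start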

No other changes were made to my setup.  So, given your comment that the ‘nonexistent’ entry is intentional, why is this now working?  And do I need to change anything else to get the same result?
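
In case it is relevant: I assume the same change could also be made through the geo-rep config interface instead of editing the file by hand, something like the below, but I have not tested that myself.

# gluster volume geo-replication gluster_vol0 node003::gluster_vol1 config remote-gsyncd /usr/libexec/glusterfs/gsyncd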


From: Chris Ferraro
Sent: Friday, June 27, 2014 12:04 PM
To: 'Venky Shankar'
Cc: gluster-users at gluster.org
Subject: RE: [Gluster-users] geo-replication status faulty

Thanks for the response Venky,

Here’s a rundown on my environment and what I did to get geo-replication set up.  Let me know if you’d like any additional information.

master volume - gluster_vol0 (node001 and node002)
slave volume  - gluster_vol1 (node003 and node004)

## Password-less SSH access is set up between master node node001 and slave node node003

The root user authenticates from node001 to node003 using /root/.ssh/id_rsa.

Tested and working as expected.
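
The key setup itself was the standard procedure, roughly:

# ssh-keygen -t rsa                  (run on node001, default path /root/.ssh/id_rsa)
# ssh-copy-id root@node003           (copy the public key to the slave)
# ssh root@node003 hostname          (verify password-less login works)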


## Ran `gluster system:: execute gsec_create` on master node node001 to create the common pem file
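
As I understand it, gsec_create gathers the secret pem keys from the master nodes into a common public key file under /var/lib/glusterd/geo-replication/.  I checked that it was created:

# ls -l /var/lib/glusterd/geo-replication/common_secret.pem.pub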

## Created the geo-replication session with

`gluster volume geo-replication gluster_vol0 node003::gluster_vol1 create push-pem`

The command completed successfully.

## Checked status

`gluster volume geo-replication gluster_vol0 node003::gluster_vol1 status`

Results listed the node001 and node002 bricks from the master and node003 as the slave; status was Stopped.

## Started geo-rep session

`gluster volume geo-replication gluster_vol0 node003::gluster_vol1 start`

Starting geo-replication session between gluster_vol0 & node003::gluster_vol1 has been successful

## Checked status

`gluster volume geo-replication gluster_vol0 node003::gluster_vol1 status`

Status of all entries is Faulty.
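
To dig into the faulty state I tailed the master-side geo-replication log (the path below matches my volume name; exact file names may differ per setup):

# tail -n 50 /var/log/glusterfs/geo-replication/gluster_vol0/*.log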

## Other

- common_secret.pem.pub exists on both slave nodes
- /usr/libexec/glusterfs/gsyncd exists on all master and slave nodes
- The gsyncd.py process is running on both master volume nodes
- /var/lib/glusterd/geo-replication/gsyncd.conf does not exist; only gsyncd_template.conf is present
- The master geo-replication log shows the following:

[2014-06-26 17:09:10.258044] E [syncdutils(/data/glusterfs/vol0/brick0/brick):223:log_raise_exception] <top>: connection to peer is broken
[2014-06-26 17:09:10.259278] W [syncdutils(/data/glusterfs/vol0/brick0/brick):227:log_raise_exception] <top>: !!!!!!!!!!!!!
[2014-06-26 17:09:10.260755] W [syncdutils(/data/glusterfs/vol0/brick0/brick):228:log_raise_exception] <top>: !!! getting "No such file or directory" errors is most likely due to MISCONFIGURATION, please consult https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/chap-User_Guide-Geo_Rep-Preparation-Settingup_Environment.html
[2014-06-26 17:09:10.261250] W [syncdutils(/data/glusterfs/vol0/brick0/brick):231:log_raise_exception] <top>: !!!!!!!!!!!!!
[2014-06-26 17:09:10.263020] E [resource(/data/glusterfs/vol0/brick0/brick):204:errlog] Popen: command "ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-DIu2bR/139ffeb96cd3f82a30d4e4ff1ff33f0b.sock root@node003 /nonexistent/gsyncd --session-owner 46d54a00-06a5-4e92-8ea4-eab0aa454c22 -N --listen --timeout 120 gluster://localhost:gluster_vol1" returned with 127, saying:
[2014-06-26 17:09:10.264806] E [resource(/data/glusterfs/vol0/brick0/brick):207:logerr] Popen: ssh> bash: /nonexistent/gsyncd: No such file or directory
[2014-06-26 17:09:10.266753] I [syncdutils(/data/glusterfs/vol0/brick0/brick):192:finalize] <top>: exiting.


Thanks again for any help

Chris




From: Venky Shankar [mailto:yknev.shankar at gmail.com]
Sent: Thursday, June 26, 2014 11:13 PM
To: Chris Ferraro
Cc: gluster-users at gluster.org
Subject: Re: [Gluster-users] geo-replication status faulty

Hey Chris,

‘/nonexistent/gsyncd’ is used deliberately in the ssh connection to prevent insecure access via ssh. Fiddling with remote_gsyncd should be avoided (it's a reserved option anyway).
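
If you want to see what the session is configured with (including remote_gsyncd) without touching the file, the config interface should show it, e.g.:

# gluster volume geo-replication gluster_vol0 node003::gluster_vol1 config | grep -i gsyncd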

As the log messages say, there seems to be a misconfiguration in the setup. Could you list the steps you used to set up the geo-rep session?

On Fri, Jun 27, 2014 at 5:59 AM, Chris Ferraro <ChrisFerraro at fico.com> wrote:
Venky Shankar, can you follow up on these questions?  I too have this issue and cannot resolve the reference to ‘/nonexistent/gsyncd’.

As Steve mentions, the nonexistent reference in the logs looks like the culprit, especially since the full ssh command, with the incorrect remote path, is printed on an earlier line and returns exit status 127 (command not found).

I have followed the configuration steps as documented in the guide, but still hit this issue.

Thanks for any help

Chris

# Master geo-replication log grab #

[2014-06-26 17:09:08.794359] I [monitor(monitor):129:monitor] Monitor: ------------------------------------------------------------
[2014-06-26 17:09:08.795387] I [monitor(monitor):130:monitor] Monitor: starting gsyncd worker
[2014-06-26 17:09:09.358588] I [gsyncd(/data/glusterfs/vol0/brick0/brick):532:main_i] <top>: syncing: gluster://localhost:gluster_vol0 -> ssh://root@node003:gluster://localhost:gluster_vol1
[2014-06-26 17:09:09.537219] I [monitor(monitor):129:monitor] Monitor: ------------------------------------------------------------
[2014-06-26 17:09:09.540030] I [monitor(monitor):130:monitor] Monitor: starting gsyncd worker
[2014-06-26 17:09:10.137434] I [gsyncd(/data/glusterfs/vol0/brick1/brick):532:main_i] <top>: syncing: gluster://localhost:gluster_vol0 -> ssh://root@node003:gluster://localhost:gluster_vol1
[2014-06-26 17:09:10.258044] E [syncdutils(/data/glusterfs/vol0/brick0/brick):223:log_raise_exception] <top>: connection to peer is broken
[2014-06-26 17:09:10.259278] W [syncdutils(/data/glusterfs/vol0/brick0/brick):227:log_raise_exception] <top>: !!!!!!!!!!!!!
[2014-06-26 17:09:10.260755] W [syncdutils(/data/glusterfs/vol0/brick0/brick):228:log_raise_exception] <top>: !!! getting "No such file or directory" errors is most likely due to MISCONFIGURATION, please consult https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/chap-User_Guide-Geo_Rep-Preparation-Settingup_Environment.html
[2014-06-26 17:09:10.261250] W [syncdutils(/data/glusterfs/vol0/brick0/brick):231:log_raise_exception] <top>: !!!!!!!!!!!!!
[2014-06-26 17:09:10.263020] E [resource(/data/glusterfs/vol0/brick0/brick):204:errlog] Popen: command "ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-DIu2bR/139ffeb96cd3f82a30d4e4ff1ff33f0b.sock root@node003 /nonexistent/gsyncd --session-owner 46d54a00-06a5-4e92-8ea4-eab0aa454c22 -N --listen --timeout 120 gluster://localhost:gluster_vol1" returned with 127, saying:
[2014-06-26 17:09:10.264806] E [resource(/data/glusterfs/vol0/brick0/brick):207:logerr] Popen: ssh> bash: /nonexistent/gsyncd: No such file or directory
[2014-06-26 17:09:10.266753] I [syncdutils(/data/glusterfs/vol0/brick0/brick):192:finalize] <top>: exiting.
[2014-06-26 17:09:11.4817] E [syncdutils(/data/glusterfs/vol0/brick1/brick):223:log_raise_exception] <top>: connection to peer is broken
[2014-06-26 17:09:11.5966] W [syncdutils(/data/glusterfs/vol0/brick1/brick):227:log_raise_exception] <top>: !!!!!!!!!!!!!
[2014-06-26 17:09:11.6467] W [syncdutils(/data/glusterfs/vol0/brick1/brick):228:log_raise_exception] <top>: !!! getting "No such file or directory" errors is most likely due to MISCONFIGURATION, please consult https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/chap-User_Guide-Geo_Rep-Preparation-Settingup_Environment.html
[2014-06-26 17:09:11.7822] W [syncdutils(/data/glusterfs/vol0/brick1/brick):231:log_raise_exception] <top>: !!!!!!!!!!!!!
[2014-06-26 17:09:11.8938] E [resource(/data/glusterfs/vol0/brick1/brick):204:errlog] Popen: command "ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-N0sS2H/139ffeb96cd3f82a30d4e4ff1ff33f0b.sock root@node003 /nonexistent/gsyncd --session-owner 46d54a00-06a5-4e92-8ea4-eab0aa454c22 -N --listen --timeout 120 gluster://localhost:gluster_vol1" returned with 127, saying:
[2014-06-26 17:09:11.10252] E [resource(/data/glusterfs/vol0/brick1/brick):207:logerr] Popen: ssh> bash: /nonexistent/gsyncd: No such file or directory
[2014-06-26 17:09:11.12820] I [syncdutils(/data/glusterfs/vol0/brick1/brick):192:finalize] <top>: exiting.
[2014-06-26 17:09:11.274421] I [monitor(monitor):157:monitor] Monitor: worker(/data/glusterfs/vol0/brick0/brick) died in startup phase
[2014-06-26 17:09:12.18722] I [monitor(monitor):157:monitor] Monitor: worker(/data/glusterfs/vol0/brick1/brick) died in startup phase


_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

