[Gluster-users] geo-replication status faulty

Chris Ferraro ChrisFerraro at fico.com
Tue Jul 1 18:06:52 UTC 2014


Yes, the authorized_keys files on the slave nodes have the master nodes' keys prepended with "command=...".
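
For anyone hitting the same thing, the slave-side entries end up looking roughly like this (key material elided; the gsyncd path matches the /usr/libexec/glusterfs/gsyncd binary noted later in this thread and may differ by distribution):

command="/usr/libexec/glusterfs/gsyncd" ssh-rsa AAAA... root@node001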

Geo-replication is now functioning in my environment as described in the docs. The /var/lib/glusterd/geo-replication/.../gsyncd.conf file did not need any modifications, and I can confirm that the "remote_gsyncd = /nonexistent/gsyncd" entry in the gsyncd.conf file does work.
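
For reference, that entry lives in the per-session config under /var/lib/glusterd/geo-replication/ (the elided part of the path above is the session directory, whose name varies with the master and slave volumes):

remote_gsyncd = /nonexistent/gsyncd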

I ended up redoing the geo-replication from scratch: deleting the replication session, recreating the common ssh keys via gsec_create, and banging my head against the wall a few times. I'm not sure where the misconfiguration was, but I would consider this resolved for me.

Thanks again for all your help, Venky.


From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of Venky Shankar
Sent: Tuesday, July 01, 2014 2:15 AM
To: gluster-users at gluster.org
Subject: Re: [Gluster-users] geo-replication status faulty

On 06/28/2014 12:33 AM, Chris Ferraro wrote:

Thanks for the response, Venky.

Here's a rundown of my environment and what I did to get geo-replication set up. Let me know if you'd like any additional information.

master volume - gluster_vol0 (node001 and node002)
slave volume  - gluster_vol1 (node003 and node004)

## Password-less SSH access is set up between the master volume node001 and the slave node003

The root user uses /root/.ssh/id_rsa to authenticate from the master volume node001 to the slave node003.

Tested and working as expected.
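
A minimal sketch of that setup, assuming root on both ends and the default key location:

ssh-keygen -t rsa                # on node001; accept the default /root/.ssh/id_rsa
ssh-copy-id root@node003         # append the public key to the slave's authorized_keys
ssh root@node003 hostname        # should print node003 with no password prompt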

## Ran `gluster system:: execute gsec_create` from the master volume node001 to create the common pem file
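
As a sanity check, the resulting pem pub file should land under the geo-replication state directory on the node where the command ran (path per my reading of the docs, so treat it as approximate):

ls /var/lib/glusterd/geo-replication/common_secret.pem.pub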

## Created the geo-replication session with

`gluster volume geo-replication gluster_vol0 node003::gluster_vol1 create push-pem`

The command completed successfully.

## Checked status

`gluster volume geo-replication gluster_vol0 node003::gluster_vol1 status`

The output listed the node001 and node002 bricks from the master and node003 as the slave, with status Stopped.

## Started the geo-rep session

`gluster volume geo-replication gluster_vol0 node003::gluster_vol1 start`

Starting geo-replication session between gluster_vol0 & node003::gluster_vol1 has been successful

## Checked status again

`gluster volume geo-replication gluster_vol0 node003::gluster_vol1 status`

The status of all entries was now faulty.
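
For anyone tracing the same failure, the per-brick worker logs that explain a faulty status live on the master under the geo-replication log directory (filenames encode the session, so this glob is approximate):

less /var/log/glusterfs/geo-replication/gluster_vol0/*.log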

## Other

* common_secret.pem.pub file exists on both slave nodes
* /usr/libexec/glusterfs/gsyncd exists on all master and slave nodes
* gsyncd.py process is running on both master volume nodes
* /var/lib/glusterd/geo-replication/gsyncd.conf does not exist; only gsyncd_template.conf
* The master geo-replication log shows the following:

[2014-06-26 17:09:10.258044] E [syncdutils(/data/glusterfs/vol0/brick0/brick):223:log_raise_exception] <top>: connection to peer is broken
[2014-06-26 17:09:10.259278] W [syncdutils(/data/glusterfs/vol0/brick0/brick):227:log_raise_exception] <top>: !!!!!!!!!!!!!
[2014-06-26 17:09:10.260755] W [syncdutils(/data/glusterfs/vol0/brick0/brick):228:log_raise_exception] <top>: !!! getting "No such file or directory" errors is most likely due to MISCONFIGURATION, please consult https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/chap-User_Guide-Geo_Rep-Preparation-Settingup_Environment.html
[2014-06-26 17:09:10.261250] W [syncdutils(/data/glusterfs/vol0/brick0/brick):231:log_raise_exception] <top>: !!!!!!!!!!!!!
[2014-06-26 17:09:10.263020] E [resource(/data/glusterfs/vol0/brick0/brick):204:errlog] Popen: command "ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-DIu2bR/139ffeb96cd3f82a30d4e4ff1ff33f0b.sock root@node003 /nonexistent/gsyncd --session-owner 46d54a00-06a5-4e92-8ea4-eab0aa454c22 -N --listen --timeout 120 gluster://localhost:gluster_vol1" returned with 127, saying:
[2014-06-26 17:09:10.264806] E [resource(/data/glusterfs/vol0/brick0/brick):207:logerr] Popen: ssh> bash: /nonexistent/gsyncd: No such file or directory
[2014-06-26 17:09:10.266753] I [syncdutils(/data/glusterfs/vol0/brick0/brick):192:finalize] <top>: exiting.

Thanks again for any help.

Chris

From: Venky Shankar [mailto:yknev.shankar at gmail.com]
Sent: Thursday, June 26, 2014 11:13 PM
To: Chris Ferraro
Cc: gluster-users at gluster.org
Subject: Re: [Gluster-users] geo-replication status faulty

Hey Chris,

'/nonexistent/gsyncd' is purposely used in the ssh connection to avoid insecure access via ssh. Fiddling with remote_gsyncd should be avoided (it's a reserved option anyway).
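
When the worker's ssh command exits with 127 and "bash: /nonexistent/gsyncd: No such file or directory", it usually means the forced command was never applied on the slave, so sshd tried to execute the literal path. A quick check on the slave (assuming root's default authorized_keys location):

grep 'command=' /root/.ssh/authorized_keys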

As the log messages say, there seems to be a misconfiguration in the setup. Could you please list the steps you're using to set up the geo-rep session?

On Fri, Jun 27, 2014 at 5:59 AM, Chris Ferraro <ChrisFerraro at fico.com> wrote:

Venky Shankar, can you follow up on these questions? I too have this issue and cannot resolve the reference to '/nonexistent/gsyncd'.

As Steve mentions, the nonexistent reference in the logs looks like the culprit, especially since the ssh command being run is printed on an earlier line with the incorrect remote path.

I have followed the configuration steps as documented in the guide, but still hit this issue.

Thanks for any help.

Chris

# Master geo-replication log grab #

[2014-06-26 17:09:08.794359] I [monitor(monitor):129:monitor] Monitor: ------------------------------------------------------------
[2014-06-26 17:09:08.795387] I [monitor(monitor):130:monitor] Monitor: starting gsyncd worker
[2014-06-26 17:09:09.358588] I [gsyncd(/data/glusterfs/vol0/brick0/brick):532:main_i] <top>: syncing: gluster://localhost:gluster_vol0 -> ssh://root@node003:gluster://localhost:gluster_vol1
[2014-06-26 17:09:09.537219] I [monitor(monitor):129:monitor] Monitor: ------------------------------------------------------------
[2014-06-26 17:09:09.540030] I [monitor(monitor):130:monitor] Monitor: starting gsyncd worker
[2014-06-26 17:09:10.137434] I [gsyncd(/data/glusterfs/vol0/brick1/brick):532:main_i] <top>: syncing: gluster://localhost:gluster_vol0 -> ssh://root@node003:gluster://localhost:gluster_vol1
[2014-06-26 17:09:10.258044] E [syncdutils(/data/glusterfs/vol0/brick0/brick):223:log_raise_exception] <top>: connection to peer is broken
[2014-06-26 17:09:10.259278] W [syncdutils(/data/glusterfs/vol0/brick0/brick):227:log_raise_exception] <top>: !!!!!!!!!!!!!
[2014-06-26 17:09:10.260755] W [syncdutils(/data/glusterfs/vol0/brick0/brick):228:log_raise_exception] <top>: !!! getting "No such file or directory" errors is most likely due to MISCONFIGURATION, please consult https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/chap-User_Guide-Geo_Rep-Preparation-Settingup_Environment.html
[2014-06-26 17:09:10.261250] W [syncdutils(/data/glusterfs/vol0/brick0/brick):231:log_raise_exception] <top>: !!!!!!!!!!!!!
[2014-06-26 17:09:10.263020] E [resource(/data/glusterfs/vol0/brick0/brick):204:errlog] Popen: command "ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-DIu2bR/139ffeb96cd3f82a30d4e4ff1ff33f0b.sock root@node003 /nonexistent/gsyncd --session-owner 46d54a00-06a5-4e92-8ea4-eab0aa454c22 -N --listen --timeout 120 gluster://localhost:gluster_vol1" returned with 127, saying:
[2014-06-26 17:09:10.264806] E [resource(/data/glusterfs/vol0/brick0/brick):207:logerr] Popen: ssh> bash: /nonexistent/gsyncd: No such file or directory
[2014-06-26 17:09:10.266753] I [syncdutils(/data/glusterfs/vol0/brick0/brick):192:finalize] <top>: exiting.
[2014-06-26 17:09:11.4817] E [syncdutils(/data/glusterfs/vol0/brick1/brick):223:log_raise_exception] <top>: connection to peer is broken
[2014-06-26 17:09:11.5966] W [syncdutils(/data/glusterfs/vol0/brick1/brick):227:log_raise_exception] <top>: !!!!!!!!!!!!!
[2014-06-26 17:09:11.6467] W [syncdutils(/data/glusterfs/vol0/brick1/brick):228:log_raise_exception] <top>: !!! getting "No such file or directory" errors is most likely due to MISCONFIGURATION, please consult https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/chap-User_Guide-Geo_Rep-Preparation-Settingup_Environment.html
[2014-06-26 17:09:11.7822] W [syncdutils(/data/glusterfs/vol0/brick1/brick):231:log_raise_exception] <top>: !!!!!!!!!!!!!
[2014-06-26 17:09:11.8938] E [resource(/data/glusterfs/vol0/brick1/brick):204:errlog] Popen: command "ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-N0sS2H/139ffeb96cd3f82a30d4e4ff1ff33f0b.sock root@node003 /nonexistent/gsyncd --session-owner 46d54a00-06a5-4e92-8ea4-eab0aa454c22 -N --listen --timeout 120 gluster://localhost:gluster_vol1" returned with 127, saying:
[2014-06-26 17:09:11.10252] E [resource(/data/glusterfs/vol0/brick1/brick):207:logerr] Popen: ssh> bash: /nonexistent/gsyncd: No such file or directory
[2014-06-26 17:09:11.12820] I [syncdutils(/data/glusterfs/vol0/brick1/brick):192:finalize] <top>: exiting.
[2014-06-26 17:09:11.274421] I [monitor(monitor):157:monitor] Monitor: worker(/data/glusterfs/vol0/brick0/brick) died in startup phase
[2014-06-26 17:09:12.18722] I [monitor(monitor):157:monitor] Monitor: worker(/data/glusterfs/vol0/brick1/brick) died in startup phase

Does authorized_keys on the slave nodes have the master nodes' keys prepended with "command=..."?

-venky