[Gluster-users] geo-replication status faulty

Venky Shankar yknev.shankar at gmail.com
Fri Jun 27 06:13:21 UTC 2014


Hey Chris,

‘/nonexistent/gsyncd’ is used on purpose in the ssh command, so that a
bare ssh login cannot be abused to run arbitrary commands. Fiddling with
remote_gsyncd should be avoided (it's a reserved option anyway).
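
The real gsyncd path is supplied on the slave side, by the forced command
that "create push-pem" installs in root's authorized_keys; the client side
deliberately asks for a bogus path. On the slave, each key pushed from the
master should carry a prefix roughly like this (the libexec path varies by
distribution, so treat the line as an illustrative sketch):

    # example entry in /root/.ssh/authorized_keys on the slave (illustrative)
    command="/usr/libexec/glusterfs/gsyncd" ssh-rsa AAAA... root@master-node

If that command= prefix is missing (for instance, because the keys were
copied over by hand instead of via "create push-pem"), sshd runs the
literal /nonexistent/gsyncd from the client command line and the worker
dies with the exit status 127 shown in the log below.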

As the log messages say, there seems to be a misconfiguration in the setup.
Could you please list the steps you used to set up the geo-rep session?
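
For reference, the usual sequence on the master looks roughly like this
(volume and host names taken from your log; consider it a sketch of the
documented procedure, not a replacement for it):

    # generate the common secret pem keys on the master cluster
    gluster system:: execute gsec_create

    # create the session and push the pem keys to the slave
    gluster volume geo-replication gluster_vol0 node003::gluster_vol1 create push-pem

    # start the session and check the workers
    gluster volume geo-replication gluster_vol0 node003::gluster_vol1 start
    gluster volume geo-replication gluster_vol0 node003::gluster_vol1 status

If your steps differ anywhere, especially around push-pem, that is the
first place to look.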


On Fri, Jun 27, 2014 at 5:59 AM, Chris Ferraro <ChrisFerraro at fico.com>
wrote:

>  Venky Shankar, can you follow up on these questions?  I too have this
> issue and cannot resolve the reference to ‘/nonexistent/gsyncd’.
>
>
>
> As Steve mentions, the nonexistent reference in the logs looks like the
> culprit, especially since the ssh command being run is printed on an
> earlier line with the incorrect remote path.
>
>
>
> I have followed the configuration steps as documented in the guide, but
> still hit this issue.
>
>
>
> Thanks for any help
>
>
>
> Chris
>
>
>
> # Master geo-replication log grab #
>
>
>
> [2014-06-26 17:09:08.794359] I [monitor(monitor):129:monitor] Monitor:
> ------------------------------------------------------------
>
> [2014-06-26 17:09:08.795387] I [monitor(monitor):130:monitor] Monitor:
> starting gsyncd worker
>
> [2014-06-26 17:09:09.358588] I
> [gsyncd(/data/glusterfs/vol0/brick0/brick):532:main_i] <top>: syncing:
> gluster://localhost:gluster_vol0 -> ssh://root@node003:gluster://localhost:gluster_vol1
>
> [2014-06-26 17:09:09.537219] I [monitor(monitor):129:monitor] Monitor:
> ------------------------------------------------------------
>
> [2014-06-26 17:09:09.540030] I [monitor(monitor):130:monitor] Monitor:
> starting gsyncd worker
>
> [2014-06-26 17:09:10.137434] I
> [gsyncd(/data/glusterfs/vol0/brick1/brick):532:main_i] <top>: syncing:
> gluster://localhost:gluster_vol0 -> ssh://root@node003:gluster://localhost:gluster_vol1
>
> [2014-06-26 17:09:10.258044] E
> [syncdutils(/data/glusterfs/vol0/brick0/brick):223:log_raise_exception]
> <top>: connection to peer is broken
>
> [2014-06-26 17:09:10.259278] W
> [syncdutils(/data/glusterfs/vol0/brick0/brick):227:log_raise_exception]
> <top>: !!!!!!!!!!!!!
>
> [2014-06-26 17:09:10.260755] W
> [syncdutils(/data/glusterfs/vol0/brick0/brick):228:log_raise_exception]
> <top>: !!! getting "No such file or directory" errors is most likely due to
> MISCONFIGURATION, please consult
> https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/chap-User_Guide-Geo_Rep-Preparation-Settingup_Environment.html
>
> [2014-06-26 17:09:10.261250] W
> [syncdutils(/data/glusterfs/vol0/brick0/brick):231:log_raise_exception]
> <top>: !!!!!!!!!!!!!
>
> [2014-06-26 17:09:10.263020] E
> [resource(/data/glusterfs/vol0/brick0/brick):204:errlog] Popen: command
> "ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i
> /var/lib/glusterd/geo-replication/secret.pem -oControlMaster=auto -S
> /tmp/gsyncd-aux-ssh-DIu2bR/139ffeb96cd3f82a30d4e4ff1ff33f0b.sock
> root@node003 /nonexistent/gsyncd --session-owner
> 46d54a00-06a5-4e92-8ea4-eab0aa454c22 -N --listen --timeout 120
> gluster://localhost:gluster_vol1" returned with 127, saying:
>
> [2014-06-26 17:09:10.264806] E
> [resource(/data/glusterfs/vol0/brick0/brick):207:logerr] Popen: ssh> bash:
> /nonexistent/gsyncd: No such file or directory
>
> [2014-06-26 17:09:10.266753] I
> [syncdutils(/data/glusterfs/vol0/brick0/brick):192:finalize] <top>: exiting.
>
> [2014-06-26 17:09:11.4817] E
> [syncdutils(/data/glusterfs/vol0/brick1/brick):223:log_raise_exception]
> <top>: connection to peer is broken
>
> [2014-06-26 17:09:11.5966] W
> [syncdutils(/data/glusterfs/vol0/brick1/brick):227:log_raise_exception]
> <top>: !!!!!!!!!!!!!
>
> [2014-06-26 17:09:11.6467] W
> [syncdutils(/data/glusterfs/vol0/brick1/brick):228:log_raise_exception]
> <top>: !!! getting "No such file or directory" errors is most likely due to
> MISCONFIGURATION, please consult
> https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/chap-User_Guide-Geo_Rep-Preparation-Settingup_Environment.html
>
> [2014-06-26 17:09:11.7822] W
> [syncdutils(/data/glusterfs/vol0/brick1/brick):231:log_raise_exception]
> <top>: !!!!!!!!!!!!!
>
> [2014-06-26 17:09:11.8938] E
> [resource(/data/glusterfs/vol0/brick1/brick):204:errlog] Popen: command
> "ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i
> /var/lib/glusterd/geo-replication/secret.pem -oControlMaster=auto -S
> /tmp/gsyncd-aux-ssh-N0sS2H/139ffeb96cd3f82a30d4e4ff1ff33f0b.sock
> root@node003 /nonexistent/gsyncd --session-owner
> 46d54a00-06a5-4e92-8ea4-eab0aa454c22 -N --listen --timeout 120
> gluster://localhost:gluster_vol1" returned with 127, saying:
>
> [2014-06-26 17:09:11.10252] E
> [resource(/data/glusterfs/vol0/brick1/brick):207:logerr] Popen: ssh> bash:
> /nonexistent/gsyncd: No such file or directory
>
> [2014-06-26 17:09:11.12820] I
> [syncdutils(/data/glusterfs/vol0/brick1/brick):192:finalize] <top>: exiting.
>
> [2014-06-26 17:09:11.274421] I [monitor(monitor):157:monitor] Monitor:
> worker(/data/glusterfs/vol0/brick0/brick) died in startup phase
>
> [2014-06-26 17:09:12.18722] I [monitor(monitor):157:monitor] Monitor:
> worker(/data/glusterfs/vol0/brick1/brick) died in startup phase
>
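
In the meantime, two quick checks (the pem path below is taken straight
from your log; the rest is a sketch):

    # on node003: every key pushed from the master should carry a
    # command="...gsyncd" forced-command prefix
    grep gsyncd /root/.ssh/authorized_keys

    # from the master: replay the ssh leg from the log by hand
    ssh -i /var/lib/glusterd/geo-replication/secret.pem root@node003 /nonexistent/gsyncd

If the second command still prints "bash: /nonexistent/gsyncd: No such
file or directory" and exits with 127, the forced command never kicked in,
and the authorized_keys entries on the slave are what needs fixing.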