[Gluster-users] can't set up geo-replication: can't fetch slave details

Kingsley Tart gluster at gluster.dogwind.com
Tue Mar 14 19:31:20 UTC 2023


Hi,

using Gluster 9.2 on Debian 11, I'm trying to set up geo-replication,
following this guide:


https://docs.gluster.org/en/main/Administrator-Guide/Geo-Replication/#password-less-ssh

I have a small volume called "ansible" which seemed like an ideal test
case.


First, a bit of feedback (this isn't my actual issue, as I worked
around it): I hit this problem following the instructions in the guide:

root@glusterA:/data/brick# gluster volume geo-replication ansible geoaccount@glusterX::ansible create push-pem
gluster command not found on glusterX for user geoaccount.
geo-replication command failed


Once I'd symlinked the gluster executable into the geoaccount user's
PATH, I ran the command again and got this error:
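For reference, the workaround was essentially the snippet below. It is
only an illustrative sketch: the directory names are stand-ins (created
as temp dirs with a stub "gluster" script so the example is
self-contained), not the exact paths on my machines. The point is that
a non-interactive SSH session gets a minimal PATH, so the binary has to
be linked into a directory that PATH actually includes.

```shell
#!/bin/sh
# Simulate the slave host: SBIN stands in for where the real gluster
# binary lives (e.g. /usr/sbin on Debian); USERBIN stands in for a
# directory on the geoaccount user's non-interactive PATH.
SBIN=$(mktemp -d)
USERBIN=$(mktemp -d)

# Stub "gluster" binary so this sketch runs anywhere.
printf '#!/bin/sh\necho gluster-ok\n' > "$SBIN/gluster"
chmod +x "$SBIN/gluster"

# The actual workaround: symlink the binary into the user's PATH.
ln -s "$SBIN/gluster" "$USERBIN/gluster"

# Invoking it through the symlink now works.
OUT=$("$USERBIN/gluster")
echo "$OUT"    # prints "gluster-ok"
```

On the real slave the equivalent would be a single `ln -s` from the
packaged binary into a PATH directory for the geoaccount session.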

root@glusterA:/data/brick# gluster volume geo-replication ansible geoaccount@glusterX::ansible create push-pem
Unable to mount and fetch slave volume details. Please check the log:
/var/log/glusterfs/geo-replication/gverify-slavemnt.log
geo-replication command failed

That log file contained this:

[2023-03-14 19:13:48.904461 +0000] I [MSGID: 100030] [glusterfsd.c:2685:main] 0-glusterfs: Started running version [{arg=glusterfs}, {version=9.2}, {cmdlinestr=glusterfs --xlator-option=*dht.lookup-unhashed=off --volfile-server glusterX --volfile-id ansible -l /var/log/glusterfs/geo-replication/gverify-slavemnt.log /tmp/gverify.sh.txIgka}]
[2023-03-14 19:13:48.905883 +0000] I [glusterfsd.c:2421:daemonize] 0-glusterfs: Pid of current running process is 3466942
[2023-03-14 19:13:48.912723 +0000] I [MSGID: 101190] [event-epoll.c:669:event_dispatch_epoll_worker] 0-epoll: Started thread with index [{index=1}]
[2023-03-14 19:13:48.912759 +0000] I [MSGID: 101190] [event-epoll.c:669:event_dispatch_epoll_worker] 0-epoll: Started thread with index [{index=0}]
[2023-03-14 19:13:48.914529 +0000] E [glusterfsd-mgmt.c:2137:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server
[2023-03-14 19:13:48.914549 +0000] E [glusterfsd-mgmt.c:2338:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:ansible)
[2023-03-14 19:13:48.914739 +0000] W [glusterfsd.c:1432:cleanup_and_exit] (-->/lib/x86_64-linux-gnu/libgfrpc.so.0(+0xfe3b) [0x7f1ca0a6ce3b] -->glusterfs(mgmt_getspec_cbk+0x762) [0x558ace8e8412] -->glusterfs(cleanup_and_exit+0x57) [0x558ace8df4f7] ) 0-: received signum (0), shutting down
[2023-03-14 19:13:48.914767 +0000] I [fuse-bridge.c:7063:fini] 0-fuse: Unmounting '/tmp/gverify.sh.txIgka'.
[2023-03-14 19:13:48.917289 +0000] I [fuse-bridge.c:7067:fini] 0-fuse: Closing fuse connection to '/tmp/gverify.sh.txIgka'.
[2023-03-14 19:13:48.917358 +0000] W [glusterfsd.c:1432:cleanup_and_exit] (-->/lib/x86_64-linux-gnu/libpthread.so.0(+0x7ea7) [0x7f1ca0a28ea7] -->glusterfs(glusterfs_sigwaiter+0xc5) [0x558ace8e7175] -->glusterfs(cleanup_and_exit+0x57) [0x558ace8df4f7] ) 0-: received signum (15), shutting down


In case it was a permissions issue, I copied the SSH public key to
root's authorized_keys file on the replication slave and tried again
with root@, but the error was the same.

Where should I look next?

Cheers,
Kingsley.