[Gluster-users] Problem on Geo Replication

Crescenzo Cuoppolo crazy at crazyworlds.org
Mon May 4 10:50:14 UTC 2015


Hi,
I am trying to configure Gluster with distributed geo-replication, but when I
run

gluster volume geo-replication gv0 web1:gv-rep0 create push-pem

the system returns this:

Unable to fetch slave volume details. Please check the slave cluster and
slave volume.
geo-replication command failed
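
A basic sanity check on the slave side would be to confirm the volume exists
and is started on web1 (assuming gv-rep0 really is the slave volume name, as
used in the create command above):

  gluster volume info gv-rep0
  gluster volume status gv-rep0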


In /var/log/glusterfs/geo-replication-slaves/slave.log I found this:


[2015-05-04 10:39:41.146746] I [rpc-clnt.c:1761:rpc_clnt_reconfig] 0-gv0-client-0: changing port to 49152 (from 0)
[2015-05-04 10:39:41.152968] I [client-handshake.c:1413:select_server_supported_programs] 0-gv0-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2015-05-04 10:39:41.153155] I [rpc-clnt.c:1761:rpc_clnt_reconfig] 0-gv0-client-1: changing port to 49152 (from 0)
[2015-05-04 10:39:41.158970] I [client-handshake.c:1200:client_setvolume_cbk] 0-gv0-client-0: Connected to gv0-client-0, attached to remote volume '/export/sdb1/brick'.
[2015-05-04 10:39:41.159010] I [client-handshake.c:1210:client_setvolume_cbk] 0-gv0-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2015-05-04 10:39:41.159155] I [MSGID: 108005] [afr-common.c:3669:afr_notify] 0-gv0-replicate-0: Subvolume 'gv0-client-0' came back up; going online.
[2015-05-04 10:39:41.159303] I [client-handshake.c:188:client_set_lk_version_cbk] 0-gv0-client-0: Server lk version = 1
[2015-05-04 10:39:41.159728] I [client-handshake.c:1413:select_server_supported_programs] 0-gv0-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2015-05-04 10:39:41.160327] I [client-handshake.c:1200:client_setvolume_cbk] 0-gv0-client-1: Connected to gv0-client-1, attached to remote volume '/export/sdb1/brick'.
[2015-05-04 10:39:41.160361] I [client-handshake.c:1210:client_setvolume_cbk] 0-gv0-client-1: Server and Client lk-version numbers are not same, reopening the fds
[2015-05-04 10:39:41.166893] I [fuse-bridge.c:5080:fuse_graph_setup] 0-fuse: switched to graph 0
[2015-05-04 10:39:41.167594] I [client-handshake.c:188:client_set_lk_version_cbk] 0-gv0-client-1: Server lk version = 1
[2015-05-04 10:39:41.167848] I [fuse-bridge.c:4009:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22 kernel 7.14
[2015-05-04 10:39:41.170020] I [afr-common.c:1477:afr_local_discovery_cbk] 0-gv0-replicate-0: selecting local read_child gv0-client-0
[2015-05-04 10:39:41.198572] I [fuse-bridge.c:4921:fuse_thread_proc] 0-fuse: unmounting /tmp/tmp.WZKJP4DEtX
[2015-05-04 10:39:41.199379] W [glusterfsd.c:1194:cleanup_and_exit] (--> 0-: received signum (15), shutting down
[2015-05-04 10:39:41.199429] I [fuse-bridge.c:5599:fini] 0-fuse: Unmounting '/tmp/tmp.WZKJP4DEtX'.
[2015-05-04 10:39:41.214037] I [MSGID: 100030] [glusterfsd.c:2018:main] 0-glusterfs: Started running glusterfs version 3.6.3 (args: glusterfs --xlator-option=*dht.lookup-unhashed=off --volfile-server web1 --volfile-id gv-rep0 -l /var/log/glusterfs/geo-replication-slaves/slave.log /tmp/tmp.Scl1T4200S)
[2015-05-04 10:40:44.227408] E [socket.c:2276:socket_connect_finish] 0-glusterfs: connection to 192.168.10.64:24007 failed (Connection timed out)
[2015-05-04 10:40:44.227607] E [glusterfsd-mgmt.c:1811:mgmt_rpc_notify] 0-glusterfsd-mgmt: failed to connect with remote-host: web1 (Transport endpoint is not connected)
[2015-05-04 10:40:44.227639] I [glusterfsd-mgmt.c:1817:mgmt_rpc_notify] 0-glusterfsd-mgmt: Exhausted all volfile servers
[2015-05-04 10:40:44.228011] W [glusterfsd.c:1194:cleanup_and_exit] (--> 0-: received signum (1), shutting down
[2015-05-04 10:40:44.228071] I [fuse-bridge.c:5599:fini] 0-fuse: Unmounting '/tmp/tmp.Scl1T4200S'.
[2015-05-04 10:40:44.234802] W [glusterfsd.c:1194:cleanup_and_exit] (--> 0-: received signum (15), shutting down

Here 192.168.10.64 is the IP of web1.
Why does the master want to connect to 192.168.10.64 on port 24007, if
geo-replication should work only over SSH?
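
A quick way to confirm whether that connection is the actual blocker would be to
probe the port from the master node (24007 is the default glusterd management
port; nc is just one of several ways to test it):

  # run on the master
  nc -zv 192.168.10.64 24007

If that also times out, a firewall on web1 or somewhere in between is presumably
dropping the traffic.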

Best Regards

Crazyworlds