[Gluster-users] geo replication session status faulty

Christos Tsalidis chtsalid at gmail.com
Fri Oct 26 08:47:14 UTC 2018


Hi all,

Geo-replication continues to give me the same problem on Gluster version
3.10.12:

[2018-10-26 08:41:34.430634] E
[resource(/bricks/brick-a1/brick):234:errlog] Popen: command "ssh
-oPasswordAuthentication=no -oStrictHostKeyChecking=no -i
/var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S
/tmp/gsyncd-aux-ssh-1vprAE/05b8d7b5dab75575689c0e1a2ec33b3f.sock
geoaccount@servere /nonexistent/gsyncd --session-owner
4d94d1ea-6818-450a-8fa8-645a7d9d36b8 --local-id
.%2Fbricks%2Fbrick-a1%2Fbrick --local-node servera -N --listen --timeout
120 gluster://localhost:slavevol" returned with 1
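
For reference, this is how I inspect the session (a sketch; the volume,
account, and host names are the ones from my setup):

```shell
# Per-brick status of the geo-replication session;
# in my case every brick reports Faulty.
gluster volume geo-replication mastervol geoaccount@servere::slavevol status detail

# Dump the session's current config options.
gluster volume geo-replication mastervol geoaccount@servere::slavevol config
```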

Is there anyone who can assist me with this problem?

Thanks in advance!


On Wed, Oct 24, 2018 at 20:02, Christos Tsalidis <
chtsalid at gmail.com> wrote:

> Hi all,
>
> I am testing the geo-replication service in Gluster version 3.10.12 on
> CentOS Linux release 7.5.1804, and my session remains in a Faulty state.
> On Gluster 3.12, the following command can be used to solve the problem:
>
> gluster vol geo-replication mastervol geoaccount@servere::slavevol config
> access_mount true
>
> Do you know whether there is an equivalent command in 3.10.12?
>
> Here are the geo-replication logs:
>
> [2018-10-24 17:54:09.613987] E
> [resource(/bricks/brick-a1/brick):238:logerr] Popen: ssh> [2018-10-24
> 17:54:08.838430] I [cli.c:759:main] 0-cli: Started running
> /usr/sbin/gluster with version 3.10.12
> [2018-10-24 17:54:09.614087] E
> [resource(/bricks/brick-a1/brick):238:logerr] Popen: ssh> [2018-10-24
> 17:54:08.838471] I [cli.c:642:cli_rpc_init] 0-cli: Connecting to remote
> glusterd at localhost
> [2018-10-24 17:54:09.614211] E
> [resource(/bricks/brick-a1/brick):238:logerr] Popen: ssh> [2018-10-24
> 17:54:08.845996] I [socket.c:4208:socket_init] 0-glusterfs: SSL support for
> glusterd is ENABLED
> [2018-10-24 17:54:09.614345] E
> [resource(/bricks/brick-a1/brick):238:logerr] Popen: ssh> [2018-10-24
> 17:54:08.846805] E [socket.c:4288:socket_init] 0-glusterfs: failed to open
> /etc/ssl/dhparam.pem, DH ciphers are disabled
> [2018-10-24 17:54:09.614475] E
> [resource(/bricks/brick-a1/brick):238:logerr] Popen: ssh> [2018-10-24
> 17:54:08.864811] I [socket.c:348:ssl_setup_connection] 0-glusterfs: peer CN
> = servere
> [2018-10-24 17:54:09.614582] E
> [resource(/bricks/brick-a1/brick):238:logerr] Popen: ssh> [2018-10-24
> 17:54:08.865488] I [socket.c:351:ssl_setup_connection] 0-glusterfs: SSL
> verification succeeded (client: )
> [2018-10-24 17:54:09.614722] E
> [resource(/bricks/brick-a1/brick):238:logerr] Popen: ssh> [2018-10-24
> 17:54:08.865676] I [socket.c:4208:socket_init] 0-glusterfs: SSL support for
> glusterd is ENABLED
> [2018-10-24 17:54:09.614826] E
> [resource(/bricks/brick-a1/brick):238:logerr] Popen: ssh> [2018-10-24
> 17:54:08.865807] E [socket.c:4288:socket_init] 0-glusterfs: failed to open
> /etc/ssl/dhparam.pem, DH ciphers are disabled
> [2018-10-24 17:54:09.614919] E
> [resource(/bricks/brick-a1/brick):238:logerr] Popen: ssh> [2018-10-24
> 17:54:09.066460] I [MSGID: 101190]
> [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread
> with index 1
> [2018-10-24 17:54:09.615006] E
> [resource(/bricks/brick-a1/brick):238:logerr] Popen: ssh> [2018-10-24
> 17:54:09.067076] I [socket.c:2426:socket_event_handler] 0-transport:
> EPOLLERR - disconnecting now
> [2018-10-24 17:54:09.615093] E
> [resource(/bricks/brick-a1/brick):238:logerr] Popen: ssh> [2018-10-24
> 17:54:09.067893] I [cli-rpc-ops.c:7024:gf_cli_getwd_cbk] 0-cli: Received
> resp to getwd
> [2018-10-24 17:54:09.615226] E
> [resource(/bricks/brick-a1/brick):238:logerr] Popen: ssh> [2018-10-24
> 17:54:09.067953] I [input.c:31:cli_batch] 0-: Exiting with: 0
> [2018-10-24 17:54:09.615494] I
> [syncdutils(/bricks/brick-a1/brick):238:finalize] <top>: exiting.
> [2018-10-24 17:54:09.616787] I
> [repce(/bricks/brick-a1/brick):92:service_loop] RepceServer: terminating on
> reaching EOF.
> [2018-10-24 17:54:09.617005] I
> [syncdutils(/bricks/brick-a1/brick):238:finalize] <top>: exiting.
> [2018-10-24 17:54:09.617331] I [monitor(monitor):347:monitor] Monitor:
> worker(/bricks/brick-a1/brick) died before establishing connection
> [2018-10-24 17:54:19.811722] I [monitor(monitor):275:monitor] Monitor:
> starting gsyncd worker(/bricks/brick-a1/brick). Slave node:
> ssh://geoaccount@servere:gluster://localhost:slavevol
> [2018-10-24 17:54:20.90926] I
> [changelogagent(/bricks/brick-a1/brick):73:__init__] ChangelogAgent: Agent
> listining...
> [2018-10-24 17:54:21.431653] E
> [syncdutils(/bricks/brick-a1/brick):270:log_raise_exception] <top>:
> connection to peer is broken
> [2018-10-24 17:54:21.432003] E
> [resource(/bricks/brick-a1/brick):234:errlog] Popen: command "ssh
> -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i
> /var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S
> /tmp/gsyncd-aux-ssh-rJaCZW/05b8d7b5dab75575689c0e1a2ec33b3f.sock
> geoaccount@servere /nonexistent/gsyncd --session-owner
> 4d94d1ea-6818-450a-8fa8-645a7d9d36b8 --local-id
> .%2Fbricks%2Fbrick-a1%2Fbrick --local-node servera -N --listen --timeout
> 120 gluster://localhost:slavevol" returned with 1, saying:
> [2018-10-24 17:54:21.432122] E
> [resource(/bricks/brick-a1/brick):238:logerr] Popen: ssh> [2018-10-24
> 17:54:20.609121] I [cli.c:759:main] 0-cli: Started running
> /usr/sbin/gluster with version 3.10.12
> [2018-10-24 17:54:21.432220] E
> [resource(/bricks/brick-a1/brick):238:logerr] Popen: ssh> [2018-10-24
> 17:54:20.609156] I [cli.c:642:cli_rpc_init] 0-cli: Connecting to remote
> glusterd at localhost
> [2018-10-24 17:54:21.432312] E
> [resource(/bricks/brick-a1/brick):238:logerr] Popen: ssh> [2018-10-24
> 17:54:20.615402] I [socket.c:4208:socket_init] 0-glusterfs: SSL support for
> glusterd is ENABLED
> [2018-10-24 17:54:21.432401] E
> [resource(/bricks/brick-a1/brick):238:logerr] Popen: ssh> [2018-10-24
> 17:54:20.616145] E [socket.c:4288:socket_init] 0-glusterfs: failed to open
> /etc/ssl/dhparam.pem, DH ciphers are disabled
> [2018-10-24 17:54:21.432574] E
> [resource(/bricks/brick-a1/brick):238:logerr] Popen: ssh> [2018-10-24
> 17:54:20.630890] I [socket.c:348:ssl_setup_connection] 0-glusterfs: peer CN
> = servere
> [2018-10-24 17:54:21.432727] E
> [resource(/bricks/brick-a1/brick):238:logerr] Popen: ssh> [2018-10-24
> 17:54:20.630904] I [socket.c:351:ssl_setup_connection] 0-glusterfs: SSL
> verification succeeded (client: )
> [2018-10-24 17:54:21.432858] E
> [resource(/bricks/brick-a1/brick):238:logerr] Popen: ssh> [2018-10-24
> 17:54:20.631159] I [socket.c:4208:socket_init] 0-glusterfs: SSL support for
> glusterd is ENABLED
> [2018-10-24 17:54:21.432995] E
> [resource(/bricks/brick-a1/brick):238:logerr] Popen: ssh> [2018-10-24
> 17:54:20.631706] E [socket.c:4288:socket_init] 0-glusterfs: failed to open
> /etc/ssl/dhparam.pem, DH ciphers are disabled
> [2018-10-24 17:54:21.433123] E
> [resource(/bricks/brick-a1/brick):238:logerr] Popen: ssh> [2018-10-24
> 17:54:20.892938] I [MSGID: 101190]
> [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread
> with index 1
> [2018-10-24 17:54:21.433243] E
> [resource(/bricks/brick-a1/brick):238:logerr] Popen: ssh> [2018-10-24
> 17:54:20.894077] I [socket.c:2426:socket_event_handler] 0-transport:
> EPOLLERR - disconnecting now
> [2018-10-24 17:54:21.433363] E
> [resource(/bricks/brick-a1/brick):238:logerr] Popen: ssh> [2018-10-24
> 17:54:20.895917] I [cli-rpc-ops.c:7024:gf_cli_getwd_cbk] 0-cli: Received
> resp to getwd
> [2018-10-24 17:54:21.433572] E
> [resource(/bricks/brick-a1/brick):238:logerr] Popen: ssh> [2018-10-24
> 17:54:20.896049] I [input.c:31:cli_batch] 0-: Exiting with: 0
> [2018-10-24 17:54:21.433921] I
> [syncdutils(/bricks/brick-a1/brick):238:finalize] <top>: exiting.
> [2018-10-24 17:54:21.435656] I
> [repce(/bricks/brick-a1/brick):92:service_loop] RepceServer: terminating on
> reaching EOF.
> [2018-10-24 17:54:21.435909] I
> [syncdutils(/bricks/brick-a1/brick):238:finalize] <top>: exiting.
> [2018-10-24 17:54:21.436253] I [monitor(monitor):347:monitor] Monitor:
> worker(/bricks/brick-a1/brick) died before establishing connection
>
> Any idea how to solve it?
>
> Thanks in advance!
>
> Best regards
>

