<div dir="ltr"><div><div><div><div>Hi Marcus,<br><br></div>I am testing 4.1 myself and will have an update later today.<br></div>For this particular traceback, gsyncd is unable to find the library.<br></div>Is this an RPM install? If so, the gluster libraries should be in /usr/lib.</div><div>Please run the commands below.</div><div><br></div><div>#ldconfig /usr/lib</div><div>#ldconfig -p /usr/lib | grep libgf (this should list libgfchangelog.so)</div><div><br></div><div>Geo-rep should then recover automatically.<br></div><div><br></div><div>Thanks,<br></div><div>Kotresh HR<br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Jul 18, 2018 at 1:27 AM, Marcus Pedersén <span dir="ltr"><<a href="mailto:marcus.pedersen@slu.se" target="_blank">marcus.pedersen@slu.se</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr" style="font-size:12pt;color:#000000;background-color:#ffffff;font-family:Calibri,Arial,Helvetica,sans-serif">
<p>Hi again,</p>
<p>I have continued testing, but I have now reached a stage where I need help.</p>
<p><br>
</p>
<p>gsyncd.log was complaining that /usr/local/sbin/gluster was missing, so I created a symlink.</p>
<p>After that /usr/local/sbin/glusterfs was missing, so I created a link there as well.</p>
<p>Both links were created on all slave nodes.</p>
<p><br>
</p>
<p>Now I have a new error that I cannot resolve myself.</p>
<p>It cannot open libgfchangelog.so<br>
</p>
<p><br>
</p>
<p>Many thanks!</p>
<p>Regards</p>
<p>Marcus Pedersén</p>
<p><br>
</p>
<p>Part of gsyncd.log:</p>
<p>OSError: libgfchangelog.so: cannot open shared object file: No such file or directory<br>
[2018-07-17 19:32:06.517106] I [repce(agent /urd-gds/gluster):89:service_loop] RepceServer: terminating on reaching EOF.<br>
[2018-07-17 19:32:07.479553] I [monitor(monitor):272:monitor] Monitor: worker died in startup phase brick=/urd-gds/gluster<br>
[2018-07-17 19:32:17.500709] I [monitor(monitor):158:monitor] Monitor: starting gsyncd worker brick=/urd-gds/gluster slave_node=urd-gds-geo-000<br>
[2018-07-17 19:32:17.541547] I [gsyncd(agent /urd-gds/gluster):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/urd-gds-volume_urd-gds-geo-001_urd-gds-volume/gsyncd.conf<br>
[2018-07-17 19:32:17.541959] I [gsyncd(worker /urd-gds/gluster):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/urd-gds-volume_urd-gds-geo-001_urd-gds-volume/gsyncd.conf<br>
[2018-07-17 19:32:17.542363] I [changelogagent(agent /urd-gds/gluster):72:__init__] ChangelogAgent: Agent listining...<br>
[2018-07-17 19:32:17.550894] I [resource(worker /urd-gds/gluster):1348:connect_remote] SSH: Initializing SSH connection between master and slave...<br>
[2018-07-17 19:32:19.166246] I [resource(worker /urd-gds/gluster):1395:connect_remote] SSH: SSH connection between master and slave established. duration=1.6151<br>
[2018-07-17 19:32:19.166806] I [resource(worker /urd-gds/gluster):1067:connect] GLUSTER: Mounting gluster volume locally...<br>
[2018-07-17 19:32:20.257344] I [resource(worker /urd-gds/gluster):1090:connect] GLUSTER: Mounted gluster volume duration=1.0901<br>
[2018-07-17 19:32:20.257921] I [subcmds(worker /urd-gds/gluster):70:subcmd_worker] <top>: Worker spawn successful. Acknowledging back to monitor<br>
[2018-07-17 19:32:20.274647] E [repce(agent /urd-gds/gluster):114:worker] <top>: call failed:<span class=""><br>
Traceback (most recent call last):<br></span>
File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 110, in worker<br>
res = getattr(self.obj, rmeth)(*in_data[2:])<br>
File "/usr/libexec/glusterfs/python/syncdaemon/changelogagent.py", line 37, in init<br>
return Changes.cl_init()<br>
File "/usr/libexec/glusterfs/python/syncdaemon/changelogagent.py", line 21, in __getattr__<br>
from libgfchangelog import Changes as LChanges<br>
File "/usr/libexec/glusterfs/python/syncdaemon/libgfchangelog.py", line 17, in <module><br>
class Changes(object):<br>
File "/usr/libexec/glusterfs/python/syncdaemon/libgfchangelog.py", line 19, in Changes<br>
use_errno=True)<br>
File "/usr/lib64/python2.7/ctypes/__init__.py", line 360, in __init__<br>
self._handle = _dlopen(self._name, mode)<br>
OSError: libgfchangelog.so: cannot open shared object file: No such file or directory<br>
[2018-07-17 19:32:20.275093] E [repce(worker /urd-gds/gluster):206:__call__] RepceClient: call failed call=6078:139982918485824:1531855940.27 method=init error=OSError<br>
[2018-07-17 19:32:20.275192] E [syncdutils(worker /urd-gds/gluster):330:log_raise_exception] <top>: FAIL:<span class=""><br>
Traceback (most recent call last):<br></span>
File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 311, in main<br>
func(args)<br>
File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line 72, in subcmd_worker<br>
local.service_loop(remote)<br>
File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 1236, in service_loop<br>
changelog_agent.init()<br>
File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 225, in __call__<br>
return self.ins(self.meth, *a)<br>
File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 207, in __call__<br>
raise res<br>
OSError: libgfchangelog.so: cannot open shared object file: No such file or directory<br>
[2018-07-17 19:32:20.286787] I [repce(agent /urd-gds/gluster):89:service_loop] RepceServer: terminating on reaching EOF.<br>
[2018-07-17 19:32:21.259891] I [monitor(monitor):272:monitor] Monitor: worker died in startup phase brick=/urd-gds/gluster</p>
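<p>The OSError above comes from ctypes handing "libgfchangelog.so" to the dynamic loader, which only searches the ld cache and the standard library directories; a library installed anywhere else fails exactly like this until ldconfig is run on its directory. A minimal sketch of the same lookup, using libm (present on any glibc system) for the working case and a deliberately made-up library name for the failing one:</p>

```python
import ctypes

# gsyncd's libgfchangelog.py does roughly:
#   ctypes.CDLL("libgfchangelog.so", use_errno=True)
# CDLL asks the dynamic loader for the name, so it succeeds only when the
# library is in the ld cache or on a standard path.

# A name the loader knows:
libm = ctypes.CDLL("libm.so.6", use_errno=True)
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]
print(libm.sqrt(4.0))  # 2.0

# A name the loader does not know fails the same way libgfchangelog.so did
# (the library name below is made up for illustration):
try:
    ctypes.CDLL("libgfchangelog-missing-example.so", use_errno=True)
except OSError as e:
    print("OSError:", e)
```

<p>If ldconfig -p | grep libgf shows nothing on the slave nodes, that would be consistent with this traceback.</p>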
<p><br>
</p>
<p><br>
</p>
<div dir="ltr" style="font-size:12pt;color:#000000;background-color:#ffffff;font-family:Calibri,Arial,Helvetica,sans-serif">
<hr style="display:inline-block;width:98%">
<div id="m_-2744748981202078584divRplyFwdMsg" dir="ltr"><font style="font-size:11pt" color="#000000" face="Calibri, sans-serif"><span class=""><b>From:</b> <a href="mailto:gluster-users-bounces@gluster.org" target="_blank">gluster-users-bounces@gluster.org</a> <<a href="mailto:gluster-users-bounces@gluster.org" target="_blank">gluster-users-bounces@gluster.org</a>> on behalf of Marcus Pedersén <<a href="mailto:marcus.pedersen@slu.se" target="_blank">marcus.pedersen@slu.se</a>><br>
</span><b>Sent:</b> 16 July 2018 21:59<br>
<b>To:</b> <a href="mailto:khiremat@redhat.com" target="_blank">khiremat@redhat.com</a><div><div class="h5"><br>
<b>Cc:</b> <a href="mailto:gluster-users@gluster.org" target="_blank">gluster-users@gluster.org</a><br>
<b>Subject:</b> Re: [Gluster-users] Upgrade to 4.1.1 geo-replication does not work</div></div></font>
<div> </div>
</div><div><div class="h5">
<div>
<p>Hi Kotresh,</p>
<p>I have been testing for a while, and as you can see from the logs I sent earlier, permission is denied for geouser on the slave node for the file:
</p>
<p>/var/log/glusterfs/cli.log</p>
<p>I have turned SELinux off, and just for testing I changed the permissions on /var/log/glusterfs/cli.log so that geouser can access it.</p>
<p>Starting geo-replication after that reports success, but all nodes end up with status Faulty.</p>
<p><br>
</p>
<p>If I run: gluster-mountbroker status<br>
</p>
<p>I get:</p>
<p>+-----------------------------+-------------+---------------------------+--------------+--------------------------+<br>
| NODE | NODE STATUS | MOUNT ROOT | GROUP | USERS |<br>
+-----------------------------+-------------+---------------------------+--------------+--------------------------+<br>
| <a href="http://urd-gds-geo-001.hgen.slu.se" target="_blank">urd-gds-geo-001.hgen.slu.se</a> | UP | /var/mountbroker-root(OK) | geogroup(OK) | geouser(urd-gds-volume) |<br>
| urd-gds-geo-002 | UP | /var/mountbroker-root(OK) | geogroup(OK) | geouser(urd-gds-volume) |<br>
| localhost | UP | /var/mountbroker-root(OK) | geogroup(OK) | geouser(urd-gds-volume) |<br>
+-----------------------------+-------------+---------------------------+--------------+--------------------------+<br>
</p>
<p><br>
</p>
<p>and those are all the nodes in the slave cluster, so the mountbroker seems OK.</p>
<p><br>
</p>
<p>gsyncd.log logs an error that /usr/local/sbin/gluster is missing.</p>
<p>That is correct, because gluster is in /sbin/gluster and /usr/sbin/gluster.</p>
<p>Another error is that SSH between master and slave is broken, but now that I have changed the permissions on /var/log/glusterfs/cli.log I can run:</p>
<p>ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -p 22 geouser@urd-gds-geo-001 gluster --xml --remote-host=localhost volume info urd-gds-volume</p>
<p>as geouser, and that works, which means the SSH connection is fine.<br>
</p>
<p><br>
</p>
<p>Are the permissions on /var/log/glusterfs/cli.log changed when geo-replication is set up?<br>
</p>
<p>Is gluster supposed to be in /usr/local/sbin/gluster?</p>
<p><br>
</p>
<p>Do I have any options, or should I remove the current geo-replication and create a new one?</p>
<p>How much do I need to clean up before creating a new geo-replication?<br>
</p>
<p>In that case, can I pause geo-replication, mount the slave cluster on the master cluster and run rsync, just to speed up the transfer of files?</p>
<p><br>
</p>
<p>Many thanks in advance!</p>
<p>Marcus Pedersén<br>
</p>
<p><br>
</p>
<p>Part from the gsyncd.log:</p>
<p>[2018-07-16 19:34:56.26287] E [syncdutils(worker /urd-gds/gluster):749:errlog] Popen: command returned error cmd=ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replicatio\<br>
n/secret.pem -p 22 -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-WrbZ22/bf60c68f1a195dad59573a8dbaa309f2.sock geouser@urd-gds-geo-001 /nonexistent/gsyncd slave urd-gds-volume geouser@urd-gds-geo-001::urd-gds-volu\<br>
me --master-node urd-gds-001 --master-node-id 912bebfd-1a7f-44dc-b0b7-f001a20d58cd --master-brick /urd-gds/gluster --local-node urd-gds-geo-000 --local-node-id 03075698-2bbf-43e4-a99a-65fe82f61794 --slave-timeo\<br>
ut 120 --slave-log-level INFO --slave-gluster-log-level INFO --slave-gluster-command-dir /usr/local/sbin/ error=1<br>
[2018-07-16 19:34:56.26583] E [syncdutils(worker /urd-gds/gluster):753:logerr] Popen: ssh> failure: execution of "/usr/local/sbin/gluster" failed with ENOENT (No such file or directory)<br>
[2018-07-16 19:34:56.33901] I [repce(agent /urd-gds/gluster):89:service_loop] RepceServer: terminating on reaching EOF.<br>
[2018-07-16 19:34:56.34307] I [monitor(monitor):262:monitor] Monitor: worker died before establishing connection brick=/urd-gds/gluster<br>
[2018-07-16 19:35:06.59412] I [monitor(monitor):158:monitor] Monitor: starting gsyncd worker brick=/urd-gds/gluster slave_node=urd-gds-geo-000<br>
[2018-07-16 19:35:06.99509] I [gsyncd(worker /urd-gds/gluster):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/urd-gds-volume_urd-gds-geo-001_urd-gds-volume/gsyncd.conf<br>
[2018-07-16 19:35:06.99561] I [gsyncd(agent /urd-gds/gluster):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/urd-gds-volume_urd-gds-geo-001_urd-gds-volume/gsyncd.conf<br>
[2018-07-16 19:35:06.100481] I [changelogagent(agent /urd-gds/gluster):72:__init__] ChangelogAgent: Agent listining...<br>
[2018-07-16 19:35:06.108834] I [resource(worker /urd-gds/gluster):1348:connect_remote] SSH: Initializing SSH connection between master and slave...<br>
[2018-07-16 19:35:06.762320] E [syncdutils(worker /urd-gds/gluster):303:log_raise_exception] <top>: connection to peer is broken<br>
[2018-07-16 19:35:06.763103] E [syncdutils(worker /urd-gds/gluster):749:errlog] Popen: command returned error cmd=ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replicatio\<br>
n/secret.pem -p 22 -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-K9mB6Q/bf60c68f1a195dad59573a8dbaa309f2.sock geouser@urd-gds-geo-001 /nonexistent/gsyncd slave urd-gds-volume geouser@urd-gds-geo-001::urd-gds-volu\<br>
me --master-node urd-gds-001 --master-node-id 912bebfd-1a7f-44dc-b0b7-f001a20d58cd --master-brick /urd-gds/gluster --local-node urd-gds-geo-000 --local-node-id 03075698-2bbf-43e4-a99a-65fe82f61794 --slave-timeo\<br>
ut 120 --slave-log-level INFO --slave-gluster-log-level INFO --slave-gluster-command-dir /usr/local/sbin/ error=1<br>
[2018-07-16 19:35:06.763398] E [syncdutils(worker /urd-gds/gluster):753:logerr] Popen: ssh> failure: execution of "/usr/local/sbin/gluster" failed with ENOENT (No such file or directory)<br>
[2018-07-16 19:35:06.771905] I [repce(agent /urd-gds/gluster):89:service_loop] RepceServer: terminating on reaching EOF.<br>
[2018-07-16 19:35:06.772272] I [monitor(monitor):262:monitor] Monitor: worker died before establishing connection brick=/urd-gds/gluster<br>
[2018-07-16 19:35:16.786387] I [monitor(monitor):158:monitor] Monitor: starting gsyncd worker brick=/urd-gds/gluster slave_node=urd-gds-geo-000<br>
[2018-07-16 19:35:16.828056] I [gsyncd(worker /urd-gds/gluster):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/urd-gds-volume_urd-gds-geo-001_urd-gds-volume/gsyncd.conf<br>
[2018-07-16 19:35:16.828066] I [gsyncd(agent /urd-gds/gluster):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/urd-gds-volume_urd-gds-geo-001_urd-gds-volume/gsyncd.conf<br>
[2018-07-16 19:35:16.828912] I [changelogagent(agent /urd-gds/gluster):72:__init__] ChangelogAgent: Agent listining...<br>
[2018-07-16 19:35:16.837100] I [resource(worker /urd-gds/gluster):1348:connect_remote] SSH: Initializing SSH connection between master and slave...<br>
[2018-07-16 19:35:17.260257] E [syncdutils(worker /urd-gds/gluster):303:log_raise_exception] <top>: connection to peer is broken<br>
<br>
</p>
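<p>The ENOENT lines in the log above are the worker reporting that executing the slave-side gluster binary at the configured command dir (/usr/local/sbin/) failed because the file does not exist. A small sketch of that failure mode (the path below is deliberately nonexistent, mirroring the missing /usr/local/sbin/gluster):</p>

```python
import errno
import subprocess

# gsyncd runs the slave's gluster binary from the configured
# slave-gluster-command-dir (/usr/local/sbin/ in the log above). When the
# binary is not there, the exec fails with ENOENT, which gsyncd logs as
# 'failed with ENOENT (No such file or directory)'.
try:
    subprocess.check_call(["/usr/local/sbin/gluster-missing-example", "--version"])
except OSError as e:  # FileNotFoundError in Python 3
    assert e.errno == errno.ENOENT
    print("execution failed with ENOENT (No such file or directory)")
```

<p>If the slave's gluster actually lives in /usr/sbin, pointing the session's slave-gluster-command-dir setting there may be cleaner than symlinking into /usr/local/sbin; check the geo-replication config options for your version before relying on this.</p>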
<div style="color:rgb(33,33,33)">
<hr style="display:inline-block;width:98%">
<div id="m_-2744748981202078584divRplyFwdMsg" dir="ltr"><font style="font-size:11pt" color="#000000" face="Calibri, sans-serif"><b>From:</b> <a href="mailto:gluster-users-bounces@gluster.org" target="_blank">gluster-users-bounces@gluster.org</a> <<a href="mailto:gluster-users-bounces@gluster.org" target="_blank">gluster-users-bounces@gluster.org</a>> on behalf of Marcus Pedersén <<a href="mailto:marcus.pedersen@slu.se" target="_blank">marcus.pedersen@slu.se</a>><br>
<b>Sent:</b> 13 July 2018 14:50<br>
<b>To:</b> Kotresh Hiremath Ravishankar<br>
<b>Cc:</b> <a href="mailto:gluster-users@gluster.org" target="_blank">gluster-users@gluster.org</a><br>
<b>Subject:</b> Re: [Gluster-users] Upgrade to 4.1.1 geo-replication does not work</font>
<div> </div>
</div>
<div>
<div dir="auto">
<div>Hi <span style="font-family:sans-serif">Kotresh,</span></div>
<div dir="auto"><font face="sans-serif">Yes, all nodes are on the same version, 4.1.1, on both master and slave.</font></div>
<div dir="auto"><font face="sans-serif">All glusterd are crashing on the master side.</font></div>
<div dir="auto"><font face="sans-serif">Will send logs tonight. <br>
</font><br>
Thanks,</div>
<div dir="auto">Marcus <br>
<br>
<div dir="auto">################<br>
Marcus Pedersén<br>
Systemadministrator <br>
Interbull Centre<br>
################<br>
Sent from my phone <br>
################</div>
<div class="gmail_extra" dir="auto"><br>
<div class="gmail_quote">On 13 July 2018 at 11:28, Kotresh Hiremath Ravishankar <<a href="mailto:khiremat@redhat.com" target="_blank">khiremat@redhat.com</a>> wrote:<br type="attribution">
<blockquote class="m_-2744748981202078584quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div>
<div dir="ltr">
<div>
<div>
<div>Hi Marcus,<br>
<br>
</div>
Is the gluster geo-rep version the same on both master and slave?<br>
<br>
</div>
Thanks,<br>
</div>
Kotresh HR<br>
</div>
<div><br>
<div class="m_-2744748981202078584elided-text">On Fri, Jul 13, 2018 at 1:26 AM, Marcus Pedersén <span dir="ltr">
<<a href="mailto:marcus.pedersen@slu.se" target="_blank">marcus.pedersen@slu.se</a>></span> wrote:<br>
<blockquote style="margin:0 0 0 0.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr" style="font-size:12pt;color:#000000;background-color:#ffffff;font-family:'calibri','arial','helvetica',sans-serif">
<p>Hi<span style="font-family:sans-serif"> Kotresh,<br>
</span></p>
<p><span style="font-family:sans-serif">I have replaced both files (<a href="https://review.gluster.org/#/c/20207/1/geo-replication/syncdaemon/gsyncdconfig.py" target="_blank">gsyncdconfig.py</a> and
<a href="https://review.gluster.org/#/c/20207/1/geo-replication/syncdaemon/repce.py" target="_blank">
repce.py</a>) in all nodes both master and slave.<br>
</span></p>
<p><span style="font-family:sans-serif">I rebooted all the servers, but the geo-replication status is still Stopped.<br>
</span></p>
<p><span style="font-family:sans-serif">I tried to start geo-replication; the response was Successful, but the status still shows Stopped on all nodes.<br>
</span></p>
<p><span style="font-family:sans-serif">Nothing has been written to the geo-replication logs since I sent the tail of the log.<br>
</span></p>
<p><span style="font-family:sans-serif">So I do not know what further info to provide.<br>
</span></p>
<p><span style="font-family:sans-serif"><br>
</span></p>
<p><span style="font-family:sans-serif">Please, help me to find a way to solve this.<br>
</span></p>
<p><span style="font-family:sans-serif"><br>
</span></p>
<p><span style="font-family:sans-serif">Thanks!<br>
</span></p>
<p><span style="font-family:sans-serif"><br>
</span></p>
<p><span style="font-family:sans-serif">Regards<br>
</span></p>
<p><span style="font-family:sans-serif">Marcus<br>
</span></p>
<p><br>
</p>
<div style="color:rgb(33,33,33)">
<hr style="display:inline-block;width:98%">
<div dir="ltr"><font style="font-size:11pt" color="#000000" face="Calibri, sans-serif"><b>From:</b>
<a href="mailto:gluster-users-bounces@gluster.org" target="_blank">gluster-users-bounces@gluster.org</a> <<a href="mailto:gluster-users-bounces@gluster.org" target="_blank">gluster-users-bounces@gluster.org</a>> on behalf of Marcus Pedersén <<a href="mailto:marcus.pedersen@slu.se" target="_blank">marcus.pedersen@slu.se</a>><br>
<b>Sent:</b> 12 July 2018 08:51<br>
<b>To:</b> Kotresh Hiremath Ravishankar<br>
<b>Cc:</b> <a href="mailto:gluster-users@gluster.org" target="_blank">gluster-users@gluster.org</a><br>
<b>Subject:</b> Re: [Gluster-users] Upgrade to 4.1.1 geo-replication does not work</font>
<div> </div>
</div>
<div>
<div>
<div>
<div dir="auto">
<div>Thanks <span style="font-family:sans-serif">Kotresh,</span></div>
<div dir="auto"><font face="sans-serif">I installed through the official CentOS channel, centos-release-gluster41.</font></div>
<div dir="auto"><font face="sans-serif">Isn't this fix included in the CentOS packages?</font></div>
<div dir="auto"><font face="sans-serif">I will have a look, test it tonight and come back to you!</font></div>
<div dir="auto"><font face="sans-serif"><br>
</font></div>
<div dir="auto"><font face="sans-serif">Thanks a lot!</font></div>
<div dir="auto"><font face="sans-serif"><br>
</font></div>
<div dir="auto"><font face="sans-serif">Regards</font></div>
<div dir="auto"><font face="sans-serif">Marcus<br>
</font><br>
<div dir="auto">################<br>
Marcus Pedersén<br>
Systemadministrator <br>
Interbull Centre<br>
################<br>
Sent from my phone <br>
################</div>
<div dir="auto"><br>
<div class="m_-2744748981202078584elided-text">On 12 July 2018 at 07:41, Kotresh Hiremath Ravishankar <<a href="mailto:khiremat@redhat.com" target="_blank">khiremat@redhat.com</a>> wrote:<br type="attribution">
<blockquote style="margin:0 0 0 0.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div>
<div dir="ltr">
<div>
<div>Hi Marcus,<br>
<br>
</div>
I think the fix [1] is needed in 4.1<br>
</div>
Could you please try this out and let us know if it works for you?<br>
<div><br>
[1] <a href="https://review.gluster.org/#/c/20207/" target="_blank">https://review.gluster.org/#/c/20207/</a></div>
<div><br>
</div>
<div>Thanks,<br>
</div>
<div>Kotresh HR<br>
</div>
</div>
<div><br>
<div>On Thu, Jul 12, 2018 at 1:49 AM, Marcus Pedersén <span dir="ltr"><<a href="mailto:marcus.pedersen@slu.se" target="_blank">marcus.pedersen@slu.se</a>></span> wrote:<br>
<blockquote style="margin:0 0 0 0.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr" style="font-size:12pt;color:#000000;background-color:#ffffff;font-family:'calibri','arial','helvetica',sans-serif">
<p>Hi all,</p>
<p>I have upgraded from 3.12.9 to 4.1.1, following the upgrade instructions for an offline upgrade.</p>
<p>I upgraded the geo-replication side first, 1 x (2+1), and the master side after that, 2 x (2+1).</p>
<p>Both clusters work as they should on their own.</p>
<p>After the upgrade, the status on the master side for all geo-replication nodes is Stopped.</p>
<p>I tried to start geo-replication from the master node, and the response was started successfully.</p>
<p>Status again: Stopped.</p>
<p>I tried to start it again and got the response started successfully; after that, glusterd crashed on all master nodes.</p>
<p>After restarting glusterd, the master cluster was up again.</p>
<p>The geo-replication status is still Stopped, and every attempt to start it after this gives the response successful, but the status remains Stopped.</p>
<p><br>
</p>
<p>Please help me get the geo-replication up and running again.</p>
<p><br>
</p>
<p>Best regards</p>
<p>Marcus Pedersén<br>
</p>
<p><br>
</p>
<p>Part of geo-replication log from master node:</p>
<p>[2018-07-11 18:42:48.941760] I [changelogagent(/urd-gds/gluster):73:__init__] ChangelogAgent: Agent listining...<br>
[2018-07-11 18:42:48.947567] I [resource(/urd-gds/gluster):1780:connect_remote] SSH: Initializing SSH connection between master and slave...<br>
[2018-07-11 18:42:49.363514] E [syncdutils(/urd-gds/gluster):304:log_raise_exception] <top>: connection to peer is broken<br>
[2018-07-11 18:42:49.364279] E [resource(/urd-gds/gluster):210:errlog] Popen: command returned error cmd=ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret\<br>
.pem -p 22 -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-hjRhBo/7e5534547f3675a710a107722317484f.sock geouser@urd-gds-geo-000 /nonexistent/gsyncd --session-owner 5e94eb7d-219f-4741-a179-d4ae6b50c7ee --local-id .%\<br>
2Furd-gds%2Fgluster --local-node urd-gds-001 -N --listen --timeout 120 gluster://localhost:urd-gds-volume error=2<br>
[2018-07-11 18:42:49.364586] E [resource(/urd-gds/gluster):214:logerr] Popen: ssh> usage: gsyncd.py [-h]<br>
[2018-07-11 18:42:49.364799] E [resource(/urd-gds/gluster):214:logerr] Popen: ssh><br>
[2018-07-11 18:42:49.364989] E [resource(/urd-gds/gluster):214:logerr] Popen: ssh> {monitor-status,monitor,worker,agent,slave,status,config-check,config-get,config-set,config-reset,voluuidget,d\<br>
elete}<br>
[2018-07-11 18:42:49.365210] E [resource(/urd-gds/gluster):214:logerr] Popen: ssh> ...<br>
[2018-07-11 18:42:49.365408] E [resource(/urd-gds/gluster):214:logerr] Popen: ssh> gsyncd.py: error: argument subcmd: invalid choice: '5e94eb7d-219f-4741-a179-d4ae6b50c7ee' (choose from 'monitor-status', 'monit\<br>
or', 'worker', 'agent', 'slave', 'status', 'config-check', 'config-get', 'config-set', 'config-reset', 'voluuidget', 'delete')<br>
[2018-07-11 18:42:49.365919] I [syncdutils(/urd-gds/gluster):271:finalize] <top>: exiting.<br>
[2018-07-11 18:42:49.369316] I [repce(/urd-gds/gluster):92:service_loop] RepceServer: terminating on reaching EOF.<br>
[2018-07-11 18:42:49.369921] I [syncdutils(/urd-gds/gluster):271:finalize] <top>: exiting.<br>
[2018-07-11 18:42:49.369694] I [monitor(monitor):353:monitor] Monitor: worker died before establishing connection brick=/urd-gds/gluster<br>
[2018-07-11 18:42:59.492762] I [monitor(monitor):280:monitor] Monitor: starting gsyncd worker brick=/urd-gds/gluster slave_node=ssh://geouser@urd-gds-geo-000:gluster://localhost:urd-gds-volume<br>
[2018-07-11 18:42:59.558491] I [resource(/urd-gds/gluster):1780:connect_remote] SSH: Initializing SSH connection between master and slave...<br>
[2018-07-11 18:42:59.559056] I [changelogagent(/urd-gds/gluster):73:__init__] ChangelogAgent: Agent listining...<br>
[2018-07-11 18:42:59.945693] E [syncdutils(/urd-gds/gluster):304:log_raise_exception] <top>: connection to peer is broken<br>
[2018-07-11 18:42:59.946439] E [resource(/urd-gds/gluster):210:errlog] Popen: command returned error cmd=ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret\<br>
.pem -p 22 -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-992bk7/7e5534547f3675a710a107722317484f.sock geouser@urd-gds-geo-000 /nonexistent/gsyncd --session-owner 5e94eb7d-219f-4741-a179-d4ae6b50c7ee --local-id .%\<br>
2Furd-gds%2Fgluster --local-node urd-gds-001 -N --listen --timeout 120 gluster://localhost:urd-gds-volume error=2<br>
[2018-07-11 18:42:59.946748] E [resource(/urd-gds/gluster):214:logerr] Popen: ssh> usage: gsyncd.py [-h]<br>
[2018-07-11 18:42:59.946962] E [resource(/urd-gds/gluster):214:logerr] Popen: ssh><br>
[2018-07-11 18:42:59.947150] E [resource(/urd-gds/gluster):214:logerr] Popen: ssh> {monitor-status,monitor,worker,agent,slave,status,config-check,config-get,config-set,config-reset,voluuidget,d\<br>
elete}<br>
[2018-07-11 18:42:59.947369] E [resource(/urd-gds/gluster):214:logerr] Popen: ssh> ...<br>
[2018-07-11 18:42:59.947552] E [resource(/urd-gds/gluster):214:logerr] Popen: ssh> gsyncd.py: error: argument subcmd: invalid choice: '5e94eb7d-219f-4741-a179-d4ae6b50c7ee' (choose from 'monitor-status', 'monit\<br>
or', 'worker', 'agent', 'slave', 'status', 'config-check', 'config-get', 'config-set', 'config-reset', 'voluuidget', 'delete')<br>
[2018-07-11 18:42:59.948046] I [syncdutils(/urd-gds/gluster):271:finalize] <top>: exiting.<br>
[2018-07-11 18:42:59.951392] I [repce(/urd-gds/gluster):92:service_loop] RepceServer: terminating on reaching EOF.<br>
[2018-07-11 18:42:59.951760] I [syncdutils(/urd-gds/gluster):271:finalize] <top>: exiting.<br>
[2018-07-11 18:42:59.951817] I [monitor(monitor):353:monitor] Monitor: worker died before establishing connection brick=/urd-gds/gluster<br>
[2018-07-11 18:43:10.54580] I [monitor(monitor):280:monitor] Monitor: starting gsyncd worker brick=/urd-gds/gluster slave_node=ssh://geouser@urd-gds-geo-000:gluster://localhost:urd-gds-volume<br>
[2018-07-11 18:43:10.88356] I [monitor(monitor):345:monitor] Monitor: Changelog Agent died, Aborting Worker brick=/urd-gds/gluster<br>
[2018-07-11 18:43:10.88613] I [monitor(monitor):353:monitor] Monitor: worker died before establishing connection brick=/urd-gds/gluster<br>
[2018-07-11 18:43:20.112435] I [gsyncdstatus(monitor):242:set_worker_status] GeorepStatus: Worker Status Change status=inconsistent<br>
[2018-07-11 18:43:20.112885] E [syncdutils(monitor):331:log_raise_exception] <top>: FAIL:<br>
Traceback (most recent call last):<br>
File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 361, in twrap<br>
except:<br>
File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line 428, in wmon<br>
sys.exit()<br>
TypeError: 'int' object is not iterable<br>
[2018-07-11 18:43:20.114610] I [syncdutils(monitor):271:finalize] <top>: exiting.<br>
</p>
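<p>The "invalid choice" lines in the log above are the slave-side gsyncd (4.1 style) rejecting an invocation that still uses pre-4.0 arguments: in 4.1 the first positional argument must be one of the listed subcommands, but the caller passes a session-owner UUID there instead (most likely stale session config, which the fix Kotresh later linked addresses). A minimal argparse sketch of why exactly that message appears, with the subcommand names copied from the log:</p>

```python
import argparse

# 4.1's gsyncd parses a subcommand first; an old-style argument list puts a
# UUID where the subcommand belongs, so argparse exits with "invalid choice".
parser = argparse.ArgumentParser(prog="gsyncd.py")
sub = parser.add_subparsers(dest="subcmd")
for name in ("monitor-status", "monitor", "worker", "agent", "slave", "status",
             "config-check", "config-get", "config-set", "config-reset",
             "voluuidget", "delete"):
    sub.add_parser(name)

try:
    parser.parse_args(["5e94eb7d-219f-4741-a179-d4ae6b50c7ee"])
except SystemExit:
    print("argparse rejected the UUID, as in the gsyncd.log above")
```
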
<p>---<br>
E-mailing SLU will result in SLU processing your personal data. For more information on how this is done, click
<a href="https://www.slu.se/en/about-slu/contact-slu/personal-data/" target="_blank">here</a></p>
</div>
<br>
_______________________________________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
<a href="https://lists.gluster.org/mailman/listinfo/gluster-users" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
</blockquote>
</div>
<br>
<br clear="all">
<br>
-- <br>
<div>
<div dir="ltr">
<div>Thanks and Regards,<br>
</div>
Kotresh H R<br>
</div>
</div>
</div>
</div>
</blockquote>
</div>
<br>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<div>
<div>
</div>
</div>
</div>
</blockquote>
</div>
<br>
<br clear="all">
<br>
-- <br>
<div>
<div dir="ltr">
<div>Thanks and Regards,<br>
</div>
Kotresh H R<br>
</div>
</div>
</div>
</div>
</blockquote>
</div>
<br>
</div>
</div>
</div>
</div>
</div>
</div>
</div></div></div><div><div class="h5">
</div></div></div>
</blockquote></div><br><br clear="all"><br>-- <br><div class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div>Thanks and Regards,<br></div>Kotresh H R<br></div></div>
</div>