<div dir="ltr">Hello - I am having a problem with geo-replication on glusterv5 that I hope someone can help me with. <br><br>I
 have a 7-server distribute cluster as the primary volume, and a 2 
server distribute cluster as the secondary volume. Both are running the 
same version of gluster on CentOS 7: glusterfs-5.3-2.el7.x86_64<br><br>I
 was able to setup the replication keys, user, groups, etc and establish
 the session, but it goes faulty quickly after initializing. <br><br>I ran into the missing libgfchangelog.so error and fixed with a symlink: <br><br><span style="font-family:courier new,monospace">[root@pcic-backup01 ~]# ln -s /usr/lib64/libgfchangelog.so.0 /usr/lib64/libgfchangelog.so<br>[root@pcic-backup01 ~]# ls -lh /usr/lib64/libgfchangelog.so*<br>lrwxrwxrwx. 1 root root  30 May 16 13:16 /usr/lib64/libgfchangelog.so -&gt; /usr/lib64/libgfchangelog.so.0<br>lrwxrwxrwx. 1 root root  23 May 16 08:58 /usr/lib64/libgfchangelog.so.0 -&gt; libgfchangelog.so.0.0.1<br>-rwxr-xr-x. 1 root root 62K Feb 25 04:02 /usr/lib64/libgfchangelog.so.0.0.1</span><br><br><br>But right now, when trying to start replication it goes faulty: <br><br><span style="font-family:courier new,monospace">[root@gluster01 ~]# gluster volume geo-replication storage geoaccount@10.0.231.81::pcic-backup start<br>Starting geo-replication session between storage &amp; geoaccount@10.0.231.81::pcic-backup has been successful<br>[root@gluster01 ~]# gluster volume geo-replication status<br> <br>MASTER
But right now, when trying to start replication, it goes faulty:

[root@gluster01 ~]# gluster volume geo-replication storage geoaccount@10.0.231.81::pcic-backup start
Starting geo-replication session between storage & geoaccount@10.0.231.81::pcic-backup has been successful
[root@gluster01 ~]# gluster volume geo-replication status

MASTER NODE    MASTER VOL    MASTER BRICK                  SLAVE USER    SLAVE                                        SLAVE NODE    STATUS             CRAWL STATUS    LAST_SYNCED
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
10.0.231.50    storage       /mnt/raid6-storage/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    N/A           Initializing...    N/A             N/A
10.0.231.54    storage       /mnt/raid6-storage/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    N/A           Initializing...    N/A             N/A
10.0.231.56    storage       /mnt/raid6-storage/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    N/A           Initializing...    N/A             N/A
10.0.231.52    storage       /mnt/raid6-storage/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    N/A           Initializing...    N/A             N/A
10.0.231.55    storage       /mnt/raid6-storage/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    N/A           Initializing...    N/A             N/A
10.0.231.51    storage       /mnt/raid6-storage/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    N/A           Initializing...    N/A             N/A
10.0.231.53    storage       /mnt/raid6-storage/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    N/A           Initializing...    N/A             N/A
[root@gluster01 ~]# gluster volume geo-replication status

MASTER NODE    MASTER VOL    MASTER BRICK                  SLAVE USER    SLAVE                                        SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED
------------------------------------------------------------------------------------------------------------------------------------------------------------------------
10.0.231.50    storage       /mnt/raid6-storage/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    N/A           Faulty    N/A             N/A
10.0.231.54    storage       /mnt/raid6-storage/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    N/A           Faulty    N/A             N/A
10.0.231.56    storage       /mnt/raid6-storage/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    N/A           Faulty    N/A             N/A
10.0.231.55    storage       /mnt/raid6-storage/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    N/A           Faulty    N/A             N/A
10.0.231.53    storage       /mnt/raid6-storage/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    N/A           Faulty    N/A             N/A
10.0.231.51    storage       /mnt/raid6-storage/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    N/A           Faulty    N/A             N/A
10.0.231.52    storage       /mnt/raid6-storage/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    N/A           Faulty    N/A             N/A
[root@gluster01 ~]# gluster volume geo-replication storage geoaccount@10.0.231.81::pcic-backup stop
Stopping geo-replication session between storage & geoaccount@10.0.231.81::pcic-backup has been successful

And the /var/log/glusterfs/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.log log file contains the error "GLUSTER: Changelog register failed        error=[Errno 21] Is a directory":

[root@gluster01 ~]# cat /var/log/glusterfs/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.log
[2019-05-23 17:07:23.500781] I [gsyncd(config-get):308:main] <top>: Using session config file        path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf
[2019-05-23 17:07:23.629298] I [gsyncd(status):308:main] <top>: Using session config file        path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf
[2019-05-23 17:07:31.354005] I [gsyncd(config-get):308:main] <top>: Using session config file        path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf
[2019-05-23 17:07:31.483582] I [gsyncd(config-get):308:main] <top>: Using session config file        path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf
[2019-05-23 17:07:31.863888] I [gsyncd(config-get):308:main] <top>: Using session config file        path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf
[2019-05-23 17:07:31.994895] I [gsyncd(monitor):308:main] <top>: Using session config file        path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf
[2019-05-23 17:07:33.133888] I [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status Change        status=Initializing...
[2019-05-23 17:07:33.134301] I [monitor(monitor):157:monitor] Monitor: starting gsyncd worker        brick=/mnt/raid6-storage/storage        slave_node=10.0.231.81
[2019-05-23 17:07:33.214462] I [gsyncd(agent /mnt/raid6-storage/storage):308:main] <top>: Using session config file        path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf
[2019-05-23 17:07:33.216737] I [changelogagent(agent /mnt/raid6-storage/storage):72:__init__] ChangelogAgent: Agent listining...
[2019-05-23 17:07:33.228072] I [gsyncd(worker /mnt/raid6-storage/storage):308:main] <top>: Using session config file        path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf
[2019-05-23 17:07:33.247236] I [resource(worker /mnt/raid6-storage/storage):1366:connect_remote] SSH: Initializing SSH connection between master and slave...
[2019-05-23 17:07:34.948796] I [gsyncd(config-get):308:main] <top>: Using session config file        path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf
[2019-05-23 17:07:35.73339] I [gsyncd(status):308:main] <top>: Using session config file        path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf
[2019-05-23 17:07:35.232405] I [resource(worker /mnt/raid6-storage/storage):1413:connect_remote] SSH: SSH connection between master and slave established.        duration=1.9849
[2019-05-23 17:07:35.232748] I [resource(worker /mnt/raid6-storage/storage):1085:connect] GLUSTER: Mounting gluster volume locally...
[2019-05-23 17:07:36.359250] I [resource(worker /mnt/raid6-storage/storage):1108:connect] GLUSTER: Mounted gluster volume        duration=1.1262
[2019-05-23 17:07:36.359639] I [subcmds(worker /mnt/raid6-storage/storage):80:subcmd_worker] <top>: Worker spawn successful. Acknowledging back to monitor
[2019-05-23 17:07:36.380975] E [repce(agent /mnt/raid6-storage/storage):122:worker] <top>: call failed:
Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 118, in worker
    res = getattr(self.obj, rmeth)(*in_data[2:])
  File "/usr/libexec/glusterfs/python/syncdaemon/changelogagent.py", line 40, in register
    return Changes.cl_register(cl_brick, cl_dir, cl_log, cl_level, retries)
  File "/usr/libexec/glusterfs/python/syncdaemon/libgfchangelog.py", line 45, in cl_register
    cls.raise_changelog_err()
  File "/usr/libexec/glusterfs/python/syncdaemon/libgfchangelog.py", line 29, in raise_changelog_err
    raise ChangelogException(errn, os.strerror(errn))
ChangelogException: [Errno 21] Is a directory
[2019-05-23 17:07:36.382556] E [repce(worker /mnt/raid6-storage/storage):214:__call__] RepceClient: call failed        call=27412:140659114579776:1558631256.38        method=register        error=ChangelogException
[2019-05-23 17:07:36.382833] E [resource(worker /mnt/raid6-storage/storage):1266:service_loop] GLUSTER: Changelog register failed        error=[Errno 21] Is a directory
[2019-05-23 17:07:36.404313] I [repce(agent /mnt/raid6-storage/storage):97:service_loop] RepceServer: terminating on reaching EOF.
[2019-05-23 17:07:37.361396] I [monitor(monitor):278:monitor] Monitor: worker died in startup phase        brick=/mnt/raid6-storage/storage
[2019-05-23 17:07:37.370690] I [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status Change        status=Faulty
[2019-05-23 17:07:41.526408] I [gsyncd(config-get):308:main] <top>: Using session config file        path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf
[2019-05-23 17:07:41.643923] I [gsyncd(status):308:main] <top>: Using session config file        path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf
[2019-05-23 17:07:45.722193] I [gsyncd(config-get):308:main] <top>: Using session config file        path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf
[2019-05-23 17:07:45.817210] I [gsyncd(config-get):308:main] <top>: Using session config file        path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf
[2019-05-23 17:07:46.188499] I [gsyncd(config-get):308:main] <top>: Using session config file        path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf
[2019-05-23 17:07:46.258817] I [gsyncd(config-get):308:main] <top>: Using session config file        path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf
[2019-05-23 17:07:47.350276] I [gsyncd(monitor-status):308:main] <top>: Using session config file        path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf
[2019-05-23 17:07:47.364751] I [subcmds(monitor-status):29:subcmd_monitor_status] <top>: Monitor Status Change        status=Stopped
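To try to narrow down whether the register call itself fails outside of gsyncd, I'm thinking of invoking the same wrapper that shows up in the traceback directly, as root on one of the primary brick nodes, with the same Python that gsyncd uses. This is only a rough sketch: it assumes libgfchangelog.py's Changes class imports cleanly outside gsyncd, and the scratch directory, log file, log level and retry count below are throwaway test values I picked, not what gsyncd actually uses:

import os
import sys

# same module directory that appears in the traceback
sys.path.insert(0, "/usr/libexec/glusterfs/python/syncdaemon")
from libgfchangelog import Changes  # assumes the class imports cleanly outside gsyncd

brick = "/mnt/raid6-storage/storage"        # brick path from the status output
scratch = "/tmp/changelog-register-test"    # throwaway scratch dir for this test
logf = "/tmp/changelog-register-test.log"   # throwaway log file for this test

if not os.path.isdir(scratch):
    os.makedirs(scratch)

Changes.cl_init()  # I believe the agent initialises the library before registering
# same argument order as in changelogagent.py: cl_brick, cl_dir, cl_log, cl_level, retries
Changes.cl_register(brick, scratch, logf, 9, 5)
print("register ok")

If that raises the same ChangelogException with errno 21, it would at least confirm the problem is in the register step itself rather than in the worker/agent plumbing.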
I'm not really sure where to go from here...

[root@gluster01 ~]# gluster volume geo-replication storage geoaccount@10.0.231.81::pcic-backup config | grep -i changelog
change_detector:changelog
changelog_archive_format:%Y%m
changelog_batch_size:727040
changelog_log_file:/var/log/glusterfs/geo-replication/storage_10.0.231.81_pcic-backup/changes-${local_id}.log
changelog_log_level:INFO

Thanks,
 -Matthew
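P.S. Since changelog_log_level shows up in the config dump above, I'm assuming it can be raised the same way other geo-rep options are set; before the next start attempt I'll try turning it up to DEBUG to get more detail in the changelog log (and presumably the main log_level option as well, if I have that name right):

[root@gluster01 ~]# gluster volume geo-replication storage geoaccount@10.0.231.81::pcic-backup config changelog_log_level DEBUG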