Have you verified all steps for creating the geo-replication?<div><br></div><div id="yMail_cursorElementTracker_1615955616619">If yes, maybe using "reset-sync-time + delete + create" makes sense. Keep in mind that the resync will take a long time once the geo-rep session is established again.</div><div id="yMail_cursorElementTracker_1615955729476"><br></div><div id="yMail_cursorElementTracker_1615955750689"><br></div><div id="yMail_cursorElementTracker_1615955750840">Best Regards,</div><div id="yMail_cursorElementTracker_1615955755206">Strahil Nikolov</div><div id="yMail_cursorElementTracker_1615955730266"><br><blockquote style="margin: 0 0 20px 0;"> <div style="font-family:Roboto, sans-serif; color:#6D00F6;"> <div>On Tue, Mar 16, 2021 at 22:34, Matthew Benstead</div><div>&lt;matthewb@uvic.ca&gt; wrote:</div> </div> <div style="padding: 10px 0 0 20px; margin: 10px 0 0 0; border-left: 1px solid #6D00F6;"> <div id="yiv5774932813"><div>
    Thanks Strahil, <br clear="none">
    <br clear="none">
    I wanted to make sure the issue wasn't occurring because there were
    no new changes to sync from the master volume. So I created some
    files and restarted the sync, but it had no effect. <br clear="none">
    <br clear="none">
    <font face="monospace">[root@storage01 ~]# cd /storage2/home/test/<br clear="none">
      [root@storage01 test]# for nums in {1,2,3,4,5,6,7,8,9,0}; do touch
      $nums.txt; done<br clear="none">
      <br clear="none">
      [root@storage01 </font><font face="monospace"><font face="monospace">test</font>]# gluster volume geo-replication
      storage <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:geoaccount@10.0.231.81::pcic-backup">geoaccount@10.0.231.81::pcic-backup</a> start<br clear="none">
      Starting geo-replication session between storage &amp;
      <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:geoaccount@10.0.231.81::pcic-backup">geoaccount@10.0.231.81::pcic-backup</a> has been successful<br clear="none">
      [root@storage01 </font><font face="monospace"><font face="monospace">test</font>]# gluster volume geo-replication
      status <br clear="none">
      &nbsp;<br clear="none">
      MASTER NODE&nbsp;&nbsp;&nbsp; MASTER VOL&nbsp;&nbsp;&nbsp; MASTER BRICK&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; SLAVE
      USER&nbsp;&nbsp;&nbsp; SLAVE&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; SLAVE NODE&nbsp;&nbsp;&nbsp;
      STATUS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; CRAWL STATUS&nbsp;&nbsp;&nbsp; LAST_SYNCED&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <br clear="none">
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------<br clear="none">
      10.0.231.91&nbsp;&nbsp;&nbsp; storage&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; /data/storage_a/storage&nbsp;&nbsp;&nbsp;
      geoaccount&nbsp;&nbsp;&nbsp; <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:ssh://geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:ssh://geoaccount@10.0.231.81::pcic-backup">ssh://geoaccount@10.0.231.81::pcic-backup</a>&nbsp;&nbsp;&nbsp;
      N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Initializing...&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
      N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <br clear="none">
      10.0.231.91&nbsp;&nbsp;&nbsp; storage&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; /data/storage_c/storage&nbsp;&nbsp;&nbsp;
      geoaccount&nbsp;&nbsp;&nbsp; <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:ssh://geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:ssh://geoaccount@10.0.231.81::pcic-backup">ssh://geoaccount@10.0.231.81::pcic-backup</a>&nbsp;&nbsp;&nbsp;
      N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Initializing...&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
      N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <br clear="none">
      10.0.231.91&nbsp;&nbsp;&nbsp; storage&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; /data/storage_b/storage&nbsp;&nbsp;&nbsp;
      geoaccount&nbsp;&nbsp;&nbsp; <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:ssh://geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:ssh://geoaccount@10.0.231.81::pcic-backup">ssh://geoaccount@10.0.231.81::pcic-backup</a>&nbsp;&nbsp;&nbsp;
      N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Initializing...&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
      N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <br clear="none">
      10.0.231.93&nbsp;&nbsp;&nbsp; storage&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; /data/storage_c/storage&nbsp;&nbsp;&nbsp;
      geoaccount&nbsp;&nbsp;&nbsp; <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:ssh://geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:ssh://geoaccount@10.0.231.81::pcic-backup">ssh://geoaccount@10.0.231.81::pcic-backup</a>&nbsp;&nbsp;&nbsp;
      N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Initializing...&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
      N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <br clear="none">
      10.0.231.93&nbsp;&nbsp;&nbsp; storage&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; /data/storage_b/storage&nbsp;&nbsp;&nbsp;
      geoaccount&nbsp;&nbsp;&nbsp; <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:ssh://geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:ssh://geoaccount@10.0.231.81::pcic-backup">ssh://geoaccount@10.0.231.81::pcic-backup</a>&nbsp;&nbsp;&nbsp;
      N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Initializing...&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
      N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <br clear="none">
      10.0.231.93&nbsp;&nbsp;&nbsp; storage&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; /data/storage_a/storage&nbsp;&nbsp;&nbsp;
      geoaccount&nbsp;&nbsp;&nbsp; <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:ssh://geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:ssh://geoaccount@10.0.231.81::pcic-backup">ssh://geoaccount@10.0.231.81::pcic-backup</a>&nbsp;&nbsp;&nbsp;
      N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Initializing...&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
      N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <br clear="none">
      10.0.231.92&nbsp;&nbsp;&nbsp; storage&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; /data/storage_b/storage&nbsp;&nbsp;&nbsp;
      geoaccount&nbsp;&nbsp;&nbsp; <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:ssh://geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:ssh://geoaccount@10.0.231.81::pcic-backup">ssh://geoaccount@10.0.231.81::pcic-backup</a>&nbsp;&nbsp;&nbsp;
      N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Initializing...&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
      N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <br clear="none">
      10.0.231.92&nbsp;&nbsp;&nbsp; storage&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; /data/storage_a/storage&nbsp;&nbsp;&nbsp;
      geoaccount&nbsp;&nbsp;&nbsp; <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:ssh://geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:ssh://geoaccount@10.0.231.81::pcic-backup">ssh://geoaccount@10.0.231.81::pcic-backup</a>&nbsp;&nbsp;&nbsp;
      N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Initializing...&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
      N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <br clear="none">
      10.0.231.92&nbsp;&nbsp;&nbsp; storage&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; /data/storage_c/storage&nbsp;&nbsp;&nbsp;
      geoaccount&nbsp;&nbsp;&nbsp; <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:ssh://geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:ssh://geoaccount@10.0.231.81::pcic-backup">ssh://geoaccount@10.0.231.81::pcic-backup</a>&nbsp;&nbsp;&nbsp;
      N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Initializing...&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
      N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <br clear="none">
      [root@storage01 </font><font face="monospace"><font face="monospace">test</font>]# gluster volume geo-replication
      status <br clear="none">
      &nbsp;<br clear="none">
      MASTER NODE&nbsp;&nbsp;&nbsp; MASTER VOL&nbsp;&nbsp;&nbsp; MASTER BRICK&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; SLAVE
      USER&nbsp;&nbsp;&nbsp; SLAVE&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; SLAVE NODE&nbsp;&nbsp;&nbsp;
      STATUS&nbsp;&nbsp;&nbsp; CRAWL STATUS&nbsp;&nbsp;&nbsp; LAST_SYNCED&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <br clear="none">
---------------------------------------------------------------------------------------------------------------------------------------------------------------------<br clear="none">
      10.0.231.91&nbsp;&nbsp;&nbsp; storage&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; /data/storage_a/storage&nbsp;&nbsp;&nbsp;
      geoaccount&nbsp;&nbsp;&nbsp; <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:ssh://geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:ssh://geoaccount@10.0.231.81::pcic-backup">ssh://geoaccount@10.0.231.81::pcic-backup</a>&nbsp;&nbsp;&nbsp;
      N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Faulty&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <br clear="none">
      10.0.231.91&nbsp;&nbsp;&nbsp; storage&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; /data/storage_c/storage&nbsp;&nbsp;&nbsp;
      geoaccount&nbsp;&nbsp;&nbsp; <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:ssh://geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:ssh://geoaccount@10.0.231.81::pcic-backup">ssh://geoaccount@10.0.231.81::pcic-backup</a>&nbsp;&nbsp;&nbsp;
      N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Faulty&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <br clear="none">
      10.0.231.91&nbsp;&nbsp;&nbsp; storage&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; /data/storage_b/storage&nbsp;&nbsp;&nbsp;
      geoaccount&nbsp;&nbsp;&nbsp; <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:ssh://geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:ssh://geoaccount@10.0.231.81::pcic-backup">ssh://geoaccount@10.0.231.81::pcic-backup</a>&nbsp;&nbsp;&nbsp;
      N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Faulty&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <br clear="none">
      10.0.231.93&nbsp;&nbsp;&nbsp; storage&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; /data/storage_c/storage&nbsp;&nbsp;&nbsp;
      geoaccount&nbsp;&nbsp;&nbsp; <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:ssh://geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:ssh://geoaccount@10.0.231.81::pcic-backup">ssh://geoaccount@10.0.231.81::pcic-backup</a>&nbsp;&nbsp;&nbsp;
      N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Faulty&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <br clear="none">
      10.0.231.93&nbsp;&nbsp;&nbsp; storage&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; /data/storage_b/storage&nbsp;&nbsp;&nbsp;
      geoaccount&nbsp;&nbsp;&nbsp; <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:ssh://geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:ssh://geoaccount@10.0.231.81::pcic-backup">ssh://geoaccount@10.0.231.81::pcic-backup</a>&nbsp;&nbsp;&nbsp;
      N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Faulty&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <br clear="none">
      10.0.231.93&nbsp;&nbsp;&nbsp; storage&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; /data/storage_a/storage&nbsp;&nbsp;&nbsp;
      geoaccount&nbsp;&nbsp;&nbsp; <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:ssh://geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:ssh://geoaccount@10.0.231.81::pcic-backup">ssh://geoaccount@10.0.231.81::pcic-backup</a>&nbsp;&nbsp;&nbsp;
      N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Faulty&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <br clear="none">
      10.0.231.92&nbsp;&nbsp;&nbsp; storage&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; /data/storage_b/storage&nbsp;&nbsp;&nbsp;
      geoaccount&nbsp;&nbsp;&nbsp; <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:ssh://geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:ssh://geoaccount@10.0.231.81::pcic-backup">ssh://geoaccount@10.0.231.81::pcic-backup</a>&nbsp;&nbsp;&nbsp;
      N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Faulty&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <br clear="none">
      10.0.231.92&nbsp;&nbsp;&nbsp; storage&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; /data/storage_a/storage&nbsp;&nbsp;&nbsp;
      geoaccount&nbsp;&nbsp;&nbsp; <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:ssh://geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:ssh://geoaccount@10.0.231.81::pcic-backup">ssh://geoaccount@10.0.231.81::pcic-backup</a>&nbsp;&nbsp;&nbsp;
      N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Faulty&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <br clear="none">
      10.0.231.92&nbsp;&nbsp;&nbsp; storage&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; /data/storage_c/storage&nbsp;&nbsp;&nbsp;
      geoaccount&nbsp;&nbsp;&nbsp; <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:ssh://geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:ssh://geoaccount@10.0.231.81::pcic-backup">ssh://geoaccount@10.0.231.81::pcic-backup</a>&nbsp;&nbsp;&nbsp;
      N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Faulty&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <br clear="none">
      [root@storage01 </font><font face="monospace"><font face="monospace">test</font>]# gluster volume geo-replication
      storage <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:geoaccount@10.0.231.81::pcic-backup">geoaccount@10.0.231.81::pcic-backup</a> stop<br clear="none">
      Stopping geo-replication session between storage &amp;
      <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:geoaccount@10.0.231.81::pcic-backup">geoaccount@10.0.231.81::pcic-backup</a> has been successful</font><br clear="none">
    <br clear="none">
    Still getting the same error about the history crawl failing: <br clear="none">
    <br clear="none">
    <font face="monospace">[2021-03-16 19:05:05.227677] I [MSGID:
      132035] [gf-history-changelog.c:837:gf_history_changelog]
      0-gfchangelog: Requesting historical changelogs
      [{start=1614666552}, {end=1615921505}] <br clear="none">
      [2021-03-16 19:05:05.227733] I [MSGID: 132019]
      [gf-history-changelog.c:755:gf_changelog_extract_min_max]
      0-gfchangelog: changelogs min max [{min=1597342860},
      {max=1615921502}, {total_changelogs=1300114}] <br clear="none">
      [2021-03-16 19:05:05.408567] E [MSGID: 132009]
      [gf-history-changelog.c:941:gf_history_changelog] 0-gfchangelog:
      wrong result [{for=end}, {start=1615921502}, {idx=1300113}] <br clear="none">
      <br clear="none">
      <br clear="none">
      [2021-03-16 19:05:05.228092] I [resource(worker
      /data/storage_c/storage):1292:service_loop] GLUSTER: Register time
      [{time=1615921505}]<br clear="none">
      [2021-03-16 19:05:05.228626] D [repce(worker
      /data/storage_c/storage):195:push] RepceClient: call
      124117:140500837320448:1615921505.23 keep_alive(None,) ...<br clear="none">
      [2021-03-16 19:05:05.230076] D [repce(worker
      /data/storage_c/storage):215:__call__] RepceClient: call
      124117:140500837320448:1615921505.23 keep_alive -&gt; 1<br clear="none">
      [2021-03-16 19:05:05.230693] D [master(worker
      /data/storage_c/storage):540:crawlwrap] _GMaster: primary master
      with volume id cf94a8f2-324b-40b3-bf72-c3766100ea99 ...<br clear="none">
      [2021-03-16 19:05:05.237607] I [gsyncdstatus(worker
      /data/storage_c/storage):281:set_active] GeorepStatus: Worker
      Status Change [{status=Active}]<br clear="none">
      [2021-03-16 19:05:05.242046] I [gsyncdstatus(worker
      /data/storage_c/storage):253:set_worker_crawl_status]
      GeorepStatus: Crawl Status Change [{status=History Crawl}]<br clear="none">
      [2021-03-16 19:05:05.242450] I [master(worker
      /data/storage_c/storage):1559:crawl] _GMaster: starting history
      crawl [{turns=1}, {stime=(1614666552, 0)},
      {entry_stime=(1614664108, 0)}, {etime=1615921505}]<br clear="none">
      [2021-03-16 19:05:05.244151] E [resource(worker
      /data/storage_c/storage):1312:service_loop] GLUSTER: Changelog
      History Crawl failed [{error=[Errno 0] Success}]<br clear="none">
      [2021-03-16 19:05:05.394129] E [resource(worker
      /data/storage_a/storage):1312:service_loop] GLUSTER: Changelog
      History Crawl failed [{error=[Errno 0] Success}]<br clear="none">
      [2021-03-16 19:05:05.408759] E [resource(worker
      /data/storage_b/storage):1312:service_loop] GLUSTER: Changelog
      History Crawl failed [{error=[Errno 0] Success}]<br clear="none">
      [2021-03-16 19:05:06.158694] I [monitor(monitor):228:monitor]
      Monitor: worker died in startup phase
      [{brick=/data/storage_a/storage}]<br clear="none">
      [2021-03-16 19:05:06.163052] I
      [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker
      Status Change [{status=Faulty}]<br clear="none">
      [2021-03-16 19:05:06.204464] I [monitor(monitor):228:monitor]
      Monitor: worker died in startup phase
      [{brick=/data/storage_b/storage}]<br clear="none">
      [2021-03-16 19:05:06.208961] I
      [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker
      Status Change [{status=Faulty}]<br clear="none">
      [2021-03-16 19:05:06.220495] I [monitor(monitor):228:monitor]
      Monitor: worker died in startup phase
      [{brick=/data/storage_c/storage}]<br clear="none">
      [2021-03-16 19:05:06.223947] I
      [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker
      Status Change [{status=Faulty}]</font><br clear="none">
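As a side note, the epoch timestamps in the log above can be decoded to show why the history crawl fails: the requested end time is a few seconds past the newest changelog on disk. A sketch with GNU date (values copied from the log lines above; treating them as Unix epochs is an assumption based on gluster's CHANGELOG.&lt;timestamp&gt; naming):

```shell
# Epoch timestamps copied from the gsyncd log above (assumed to be Unix
# epochs, matching gluster's CHANGELOG.<timestamp> naming).
date -u -d @1614666552   # requested history start (stime)
date -u -d @1615921505   # requested history end (register time)
date -u -d @1597342860   # oldest changelog (min)
date -u -d @1615921502   # newest changelog (max)

# The history API can only serve a window covered by on-disk changelogs;
# the requested end is 3 seconds past the newest changelog, which matches
# the "wrong result [{for=end}, ...]" error above.
echo $(( 1615921505 - 1615921502 ))   # seconds the window overshoots
```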
    <br clear="none">
    I confirmed NTP is working: <br clear="none">
    <br clear="none">
    <font face="monospace"><br clear="none">
      pcic-backup02 | CHANGED | rc=0 &gt;&gt;<br clear="none">
      &nbsp;&nbsp;&nbsp;&nbsp; remote&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; refid&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; st t when poll reach&nbsp;&nbsp; delay&nbsp;&nbsp;
      offset&nbsp; jitter<br clear="none">
==============================================================================<br clear="none">
      +s216-232-132-95 68.69.221.61&nbsp;&nbsp;&nbsp;&nbsp; 2 u&nbsp;&nbsp; 29 1024&nbsp; 377&nbsp;&nbsp; 24.141&nbsp;&nbsp;&nbsp;
      2.457&nbsp;&nbsp; 1.081<br clear="none">
      *yyz-1.ip.0xt.ca 206.108.0.131&nbsp;&nbsp;&nbsp; 2 u&nbsp; 257 1024&nbsp; 377&nbsp;&nbsp; 57.119&nbsp;&nbsp;
      -0.084&nbsp;&nbsp; 5.625<br clear="none">
      +ip102.ip-198-27 192.168.10.254&nbsp;&nbsp; 2 u&nbsp; 189 1024&nbsp; 377&nbsp;&nbsp; 64.227&nbsp;&nbsp;
      -3.012&nbsp;&nbsp; 8.867<br clear="none">
      <br clear="none">
      storage03 | CHANGED | rc=0 &gt;&gt;<br clear="none">
      &nbsp;&nbsp;&nbsp;&nbsp; remote&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; refid&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; st t when poll reach&nbsp;&nbsp; delay&nbsp;&nbsp;
      offset&nbsp; jitter<br clear="none">
==============================================================================<br clear="none">
      *198.161.203.36&nbsp; 128.233.150.93&nbsp;&nbsp; 2 u&nbsp;&nbsp; 36 1024&nbsp; 377&nbsp;&nbsp; 16.055&nbsp;&nbsp;
      -0.381&nbsp;&nbsp; 0.318<br clear="none">
      +s206-75-147-25. 192.168.10.254&nbsp;&nbsp; 2 u&nbsp; 528 1024&nbsp; 377&nbsp;&nbsp; 23.648&nbsp;&nbsp;
      -6.196&nbsp;&nbsp; 4.803<br clear="none">
      +time.cloudflare 10.69.8.80&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 3 u&nbsp; 121 1024&nbsp; 377&nbsp;&nbsp;&nbsp; 2.408&nbsp;&nbsp;&nbsp;
      0.507&nbsp;&nbsp; 0.791<br clear="none">
      <br clear="none">
      storage02 | CHANGED | rc=0 &gt;&gt;<br clear="none">
      &nbsp;&nbsp;&nbsp;&nbsp; remote&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; refid&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; st t when poll reach&nbsp;&nbsp; delay&nbsp;&nbsp;
      offset&nbsp; jitter<br clear="none">
==============================================================================<br clear="none">
      *198.161.203.36&nbsp; 128.233.150.93&nbsp;&nbsp; 2 u&nbsp; 918 1024&nbsp; 377&nbsp;&nbsp; 15.952&nbsp;&nbsp;&nbsp;
      0.226&nbsp;&nbsp; 0.197<br clear="none">
      +linuxgeneration 16.164.40.197&nbsp;&nbsp;&nbsp; 2 u&nbsp;&nbsp; 88 1024&nbsp; 377&nbsp;&nbsp; 62.692&nbsp;&nbsp;
      -1.160&nbsp;&nbsp; 2.007<br clear="none">
      +dns3.switch.ca&nbsp; 206.108.0.131&nbsp;&nbsp;&nbsp; 2 u&nbsp; 857 1024&nbsp; 377&nbsp;&nbsp; 27.315&nbsp;&nbsp;&nbsp;
      0.778&nbsp;&nbsp; 0.483<br clear="none">
      <br clear="none">
      storage01 | CHANGED | rc=0 &gt;&gt;<br clear="none">
      &nbsp;&nbsp;&nbsp;&nbsp; remote&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; refid&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; st t when poll reach&nbsp;&nbsp; delay&nbsp;&nbsp;
      offset&nbsp; jitter<br clear="none">
==============================================================================<br clear="none">
      +198.161.203.36&nbsp; 128.233.150.93&nbsp;&nbsp; 2 u&nbsp; 121 1024&nbsp; 377&nbsp;&nbsp; 16.069&nbsp;&nbsp;&nbsp;
      1.016&nbsp;&nbsp; 0.195<br clear="none">
      +zero.gotroot.ca 30.114.5.31&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 2 u&nbsp; 543 1024&nbsp; 377&nbsp;&nbsp;&nbsp; 5.106&nbsp;&nbsp;
      -2.462&nbsp;&nbsp; 4.923<br clear="none">
      *ntp3.torix.ca&nbsp;&nbsp; .PTP0.&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 1 u&nbsp; 300 1024&nbsp; 377&nbsp;&nbsp; 54.010&nbsp;&nbsp;&nbsp;
      2.421&nbsp; 15.182<br clear="none">
      <br clear="none">
      pcic-backup01 | CHANGED | rc=0 &gt;&gt;<br clear="none">
      &nbsp;&nbsp;&nbsp;&nbsp; remote&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; refid&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; st t when poll reach&nbsp;&nbsp; delay&nbsp;&nbsp;
      offset&nbsp; jitter<br clear="none">
==============================================================================<br clear="none">
      *dns3.switch.ca&nbsp; 206.108.0.131&nbsp;&nbsp;&nbsp; 2 u&nbsp; 983 1024&nbsp; 377&nbsp;&nbsp; 26.990&nbsp;&nbsp;&nbsp;
      0.523&nbsp;&nbsp; 1.389<br clear="none">
      +dns2.switch.ca&nbsp; 206.108.0.131&nbsp;&nbsp;&nbsp; 2 u&nbsp; 689 1024&nbsp; 377&nbsp;&nbsp; 26.975&nbsp;&nbsp;
      -0.257&nbsp;&nbsp; 0.467<br clear="none">
      +64.ip-54-39-23. 214.176.184.39&nbsp;&nbsp; 2 u&nbsp; 909 1024&nbsp; 377&nbsp;&nbsp; 64.262&nbsp;&nbsp;
      -0.604&nbsp;&nbsp; 6.129</font><br clear="none">
    <br clear="none">
    And all nodes are running the same version of gluster: <br clear="none">
    <br clear="none">
    <font face="monospace">pcic-backup02 | CHANGED | rc=0 &gt;&gt;<br clear="none">
      glusterfs 8.3<br clear="none">
      pcic-backup01 | CHANGED | rc=0 &gt;&gt;<br clear="none">
      glusterfs 8.3<br clear="none">
      storage02 | CHANGED | rc=0 &gt;&gt;<br clear="none">
      glusterfs 8.3<br clear="none">
      storage01 | CHANGED | rc=0 &gt;&gt;<br clear="none">
      glusterfs 8.3<br clear="none">
      storage03 | CHANGED | rc=0 &gt;&gt;<br clear="none">
      glusterfs 8.3</font><br clear="none">
    <br clear="none">
    SSH works, and the backup user/group is configured with mountbroker:
    <br clear="none">
    <br clear="none">
    <font face="monospace">[root@storage01 ~]# ssh -i /root/.ssh/id_rsa
      <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:geoaccount@10.0.231.81" target="_blank" href="mailto:geoaccount@10.0.231.81">geoaccount@10.0.231.81</a> uname -a<br clear="none">
      Linux pcic-backup01 3.10.0-1160.15.2.el7.x86_64 #1 SMP Wed Feb 3
      15:06:38 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux<br clear="none">
      [root@storage01 ~]# ssh -i /root/.ssh/id_rsa
      <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:geoaccount@10.0.231.82" target="_blank" href="mailto:geoaccount@10.0.231.82">geoaccount@10.0.231.82</a> uname -a<br clear="none">
      Linux pcic-backup02 3.10.0-1160.15.2.el7.x86_64 #1 SMP Wed Feb 3
      15:06:38 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux<br clear="none">
      <br clear="none">
      <br clear="none">
      [root@pcic-backup01 ~]# grep geo /etc/passwd<br clear="none">
      geoaccount:x:1000:1000::/home/geoaccount:/bin/bash<br clear="none">
      [root@pcic-backup01 ~]# grep geo /etc/group<br clear="none">
      geogroup:x:1000:geoaccount<br clear="none">
      geoaccount:x:1001:geoaccount<br clear="none">
      <br clear="none">
      [root@pcic-backup01 ~]# gluster-mountbroker status<br clear="none">
+-------------+-------------+---------------------------+--------------+--------------------------+<br clear="none">
      |&nbsp;&nbsp;&nbsp;&nbsp; NODE&nbsp;&nbsp;&nbsp; | NODE STATUS |&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; MOUNT ROOT&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |&nbsp;&nbsp;&nbsp;
      GROUP&nbsp;&nbsp;&nbsp;&nbsp; |&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; USERS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |<br clear="none">
+-------------+-------------+---------------------------+--------------+--------------------------+<br clear="none">
      | 10.0.231.82 |&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; UP | /var/mountbroker-root(OK) |
      geogroup(OK) | geoaccount(pcic-backup)&nbsp; |<br clear="none">
      |&nbsp; localhost&nbsp; |&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; UP | /var/mountbroker-root(OK) |
      geogroup(OK) | geoaccount(pcic-backup)&nbsp; |<br clear="none">
+-------------+-------------+---------------------------+--------------+--------------------------+</font><br clear="none">
    <br clear="none">
    <br clear="none">
    <br clear="none">
    <br clear="none">
    So, if I'm going to have to resync, what is the best way to do
    this? <br clear="none">
    <br clear="none">
    With plain delete, or with delete reset-sync-time?&nbsp;&nbsp;
<a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-freetext" target="_blank" href="https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/sect-starting_geo-replication#Deleting_a_Geo-replication_Session">https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/sect-starting_geo-replication#Deleting_a_Geo-replication_Session</a><br clear="none">
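    For reference, a sketch of the two variants being weighed (volume and secondary names taken from the session in this thread; run on the primary side, and the session must be stopped first):

```shell
# Stop the session before deleting it.
gluster volume geo-replication storage geoaccount@10.0.231.81::pcic-backup stop

# Run ONE of the two delete forms below, not both.

# Option 1: plain delete. The saved sync time (stime) survives, so a
# recreated session should resume crawling from where it left off.
gluster volume geo-replication storage geoaccount@10.0.231.81::pcic-backup delete

# Option 2: delete with reset-sync-time. The stime is cleared, so the
# next session starts from scratch with a full crawl (the slow path).
gluster volume geo-replication storage geoaccount@10.0.231.81::pcic-backup delete reset-sync-time
```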
    <br clear="none">
    <br clear="none">
    <br clear="none">
    Erasing the index, so I don't have to re-transfer the files that
    are already on the backup? <br clear="none">
    <ul><li><a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-freetext" target="_blank" href="https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.1/html/administration_guide/sect-troubleshooting_geo-replication#Synchronization_Is_Not_Complete">https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.1/html/administration_guide/sect-troubleshooting_geo-replication#Synchronization_Is_Not_Complete</a>
      </li><li><a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-freetext" target="_blank" href="https://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Administrator%20Guide/Geo%20Replication/#best-practices">https://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Administrator%20Guide/Geo%20Replication/#best-practices</a><br clear="none">
      </li></ul>
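One way to see what the session currently considers synced is the per-brick stime extended attribute. A sketch (the exact xattr name embeds the primary and secondary volume UUIDs, which aren't shown in this thread, so dumping all trusted xattrs and grepping is easiest; the brick path is taken from the logs above):

```shell
# Dump all trusted.* xattrs on a brick root in hex and look for the
# stime key, e.g. trusted.glusterfs.<primary-uuid>.<secondary-uuid>.stime
getfattr -d -m . -e hex /data/storage_a/storage | grep stime
```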
    <br clear="none">
    <br clear="none">
    <br clear="none">
    Is it possible to use the special-sync-mode option from here:
<a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-freetext" target="_blank" href="https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/sect-disaster_recovery">https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/sect-disaster_recovery</a><br clear="none">
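As I read that disaster-recovery section, special-sync-mode is meant for failing back after a failover: it is set on the session that syncs data from the promoted secondary back to the original primary, so it may not apply to a plain resync. The documented form is roughly the following (angle-bracket names are placeholders, since the failback session's direction is reversed relative to this thread's session):

```shell
# From the linked doc: configure recovery mode on the failback session
# so only data written since the failover is synced back
gluster volume geo-replication <promoted-vol> <original-primary-host>::<vol> config special-sync-mode recover
```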
    <br clear="none">
    <br clear="none">
    <br clear="none">
    Thoughts? <br clear="none">
    <br clear="none">
    Thanks,<br clear="none">
    &nbsp;-Matthew<br clear="none">
    --<br clear="none">
    <br clear="none">
    <div class="yiv5774932813yqt5997629920" id="yiv5774932813yqt93720"><div class="yiv5774932813moz-cite-prefix">On 3/12/21 3:31 PM, Strahil Nikolov
      wrote:<br clear="none">
    </div>
    <blockquote type="cite">
      <div>Usually, when I'm stuck, I just start over.
        <div id="yiv5774932813yMail_cursorElementTracker_1615591681347">For example,
          check the prerequisites:</div>
        <div id="yiv5774932813yMail_cursorElementTracker_1615591695583">- Is ssh
          available (no firewall blocking)</div>
        <div id="yiv5774932813yMail_cursorElementTracker_1615591712278">- Is time
          sync enabled (ntp/chrony)</div>
        <div id="yiv5774932813yMail_cursorElementTracker_1615591726239">- Is DNS ok
          on all hosts (including PTR records)</div>
        <div id="yiv5774932813yMail_cursorElementTracker_1615591754957">- Is the
          gluster version the same on all nodes (primary &amp;
          secondary)</div>
        <div id="yiv5774932813yMail_cursorElementTracker_1615591754957"><br clear="none">
        </div>
        <div id="yiv5774932813yMail_cursorElementTracker_1615591782389">Then start
          over as if the geo-rep never existed. For example, stop it and
          redo the secondary nodes' checks (mountbroker, user, group).</div>
        <div id="yiv5774932813yMail_cursorElementTracker_1615591782389"><br clear="none">
        </div>
        <div id="yiv5774932813yMail_cursorElementTracker_1615591841464">Most probably
          something will come up and you will fix it.</div>
        <div id="yiv5774932813yMail_cursorElementTracker_1615591841464"><br clear="none">
        </div>
        <div id="yiv5774932813yMail_cursorElementTracker_1615591864006">In the worst-case
          scenario, you will need to clean up the geo-rep and start
          fresh.</div>
        <div id="yiv5774932813yMail_cursorElementTracker_1615591864006"><br clear="none">
        </div>
        <div id="yiv5774932813yMail_cursorElementTracker_1615591889862"><br clear="none">
        </div>
        <div id="yiv5774932813yMail_cursorElementTracker_1615591889925">Best Regards,</div>
        <div id="yiv5774932813yMail_cursorElementTracker_1615591895692">Strahil
          Nikolov<br clear="none">
          <br clear="none">
          <blockquote>
            <div>
              <div>On Fri, Mar 12, 2021 at 20:01, Matthew Benstead</div>
              <div><a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-rfc2396E" ymailto="mailto:matthewb@uvic.ca" target="_blank" href="mailto:matthewb@uvic.ca">&lt;matthewb@uvic.ca&gt;</a> wrote:</div>
            </div>
            <div>
              <div id="yiv5774932813">
                <div>Hi Strahil, <br clear="none">
                  <br clear="none">
                  Yes, SELinux was put into permissive mode on the
                  secondary nodes as well: <br clear="none">
                  <br clear="none">
                  [root@pcic-backup01 ~]# sestatus | egrep -i&nbsp; "^SELinux
                  status|mode"<br clear="none">
                  SELinux status:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; enabled<br clear="none">
                  Current mode:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; permissive<br clear="none">
                  Mode from config file:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; enforcing<br clear="none">
                  <br clear="none">
                  [root@pcic-backup02 ~]# sestatus | egrep -i&nbsp; "^SELinux
                  status|mode"<br clear="none">
                  SELinux status:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; enabled<br clear="none">
                  Current mode:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; permissive<br clear="none">
                  Mode from config file:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; enforcing<br clear="none">
                  <br clear="none">
                  The secondary server logs didn't show anything
                  interesting: <br clear="none">
                  <br clear="none">
                  gsyncd.log: <br clear="none">
                  <br clear="none">
                  [2021-03-11 19:15:28.81820] I [resource(slave
                  10.0.231.92/data/storage_c/storage):1116:connect]
                  GLUSTER: Mounting gluster volume locally...<br clear="none">
                  [2021-03-11 19:15:28.101819] I [resource(slave
                  10.0.231.91/data/storage_a/storage):1116:connect]
                  GLUSTER: Mounting gluster volume locally...<br clear="none">
                  [2021-03-11 19:15:28.107012] I [resource(slave
                  10.0.231.93/data/storage_c/storage):1116:connect]
                  GLUSTER: Mounting gluster volume locally...<br clear="none">
                  [2021-03-11 19:15:28.124567] I [resource(slave
                  10.0.231.93/data/storage_b/storage):1116:connect]
                  GLUSTER: Mounting gluster volume locally...<br clear="none">
                  [2021-03-11 19:15:28.128145] I [resource(slave
                  10.0.231.93/data/storage_a/storage):1116:connect]
                  GLUSTER: Mounting gluster volume locally...<br clear="none">
                  [2021-03-11 19:15:29.425739] I [resource(slave
                  10.0.231.93/data/storage_c/storage):1139:connect]
                  GLUSTER: Mounted gluster volume [{duration=1.3184}]<br clear="none">
                  [2021-03-11 19:15:29.427448] I [resource(slave
                  10.0.231.93/data/storage_c/storage):1166:service_loop]
                  GLUSTER: slave listening<br clear="none">
                  [2021-03-11 19:15:29.433340] I [resource(slave
                  10.0.231.93/data/storage_b/storage):1139:connect]
                  GLUSTER: Mounted gluster volume [{duration=1.3083}]<br clear="none">
                  [2021-03-11 19:15:29.434452] I [resource(slave
                  10.0.231.91/data/storage_a/storage):1139:connect]
                  GLUSTER: Mounted gluster volume [{duration=1.3321}]<br clear="none">
                  [2021-03-11 19:15:29.434314] I [resource(slave
                  10.0.231.93/data/storage_b/storage):1166:service_loop]
                  GLUSTER: slave listening<br clear="none">
                  [2021-03-11 19:15:29.435575] I [resource(slave
                  10.0.231.91/data/storage_a/storage):1166:service_loop]
                  GLUSTER: slave listening<br clear="none">
                  [2021-03-11 19:15:29.439769] I [resource(slave
                  10.0.231.92/data/storage_c/storage):1139:connect]
                  GLUSTER: Mounted gluster volume [{duration=1.3576}]<br clear="none">
                  [2021-03-11 19:15:29.440998] I [resource(slave
                  10.0.231.92/data/storage_c/storage):1166:service_loop]
                  GLUSTER: slave listening<br clear="none">
                  [2021-03-11 19:15:29.454745] I [resource(slave
                  10.0.231.93/data/storage_a/storage):1139:connect]
                  GLUSTER: Mounted gluster volume [{duration=1.3262}]<br clear="none">
                  [2021-03-11 19:15:29.456192] I [resource(slave
                  10.0.231.93/data/storage_a/storage):1166:service_loop]
                  GLUSTER: slave listening<br clear="none">
                  [2021-03-11 19:15:32.594865] I [repce(slave
                  10.0.231.92/data/storage_c/storage):96:service_loop]
                  RepceServer: terminating on reaching EOF.<br clear="none">
                  [2021-03-11 19:15:32.607815] I [repce(slave
                  10.0.231.93/data/storage_c/storage):96:service_loop]
                  RepceServer: terminating on reaching EOF.<br clear="none">
                  [2021-03-11 19:15:32.647663] I [repce(slave
                  10.0.231.93/data/storage_b/storage):96:service_loop]
                  RepceServer: terminating on reaching EOF.<br clear="none">
                  [2021-03-11 19:15:32.656280] I [repce(slave
                  10.0.231.91/data/storage_a/storage):96:service_loop]
                  RepceServer: terminating on reaching EOF.<br clear="none">
                  [2021-03-11 19:15:32.668299] I [repce(slave
                  10.0.231.93/data/storage_a/storage):96:service_loop]
                  RepceServer: terminating on reaching EOF.<br clear="none">
                  [2021-03-11 19:15:44.260689] I [resource(slave
                  10.0.231.92/data/storage_c/storage):1116:connect]
                  GLUSTER: Mounting gluster volume locally...<br clear="none">
                  [2021-03-11 19:15:44.271457] I [resource(slave
                  10.0.231.93/data/storage_c/storage):1116:connect]
                  GLUSTER: Mounting gluster volume locally...<br clear="none">
                  [2021-03-11 19:15:44.271883] I [resource(slave
                  10.0.231.93/data/storage_b/storage):1116:connect]
                  GLUSTER: Mounting gluster volume locally...<br clear="none">
                  [2021-03-11 19:15:44.279670] I [resource(slave
                  10.0.231.91/data/storage_a/storage):1116:connect]
                  GLUSTER: Mounting gluster volume locally...<br clear="none">
                  [2021-03-11 19:15:44.284261] I [resource(slave
                  10.0.231.93/data/storage_a/storage):1116:connect]
                  GLUSTER: Mounting gluster volume locally...<br clear="none">
                  [2021-03-11 19:15:45.614280] I [resource(slave
                  10.0.231.93/data/storage_b/storage):1139:connect]
                  GLUSTER: Mounted gluster volume [{duration=1.3419}]<br clear="none">
                  [2021-03-11 19:15:45.615622] I [resource(slave
                  10.0.231.93/data/storage_b/storage):1166:service_loop]
                  GLUSTER: slave listening<br clear="none">
                  [2021-03-11 19:15:45.617986] I [resource(slave
                  10.0.231.93/data/storage_c/storage):1139:connect]
                  GLUSTER: Mounted gluster volume [{duration=1.3461}]<br clear="none">
                  [2021-03-11 19:15:45.618180] I [resource(slave
                  10.0.231.91/data/storage_a/storage):1139:connect]
                  GLUSTER: Mounted gluster volume [{duration=1.3380}]<br clear="none">
                  [2021-03-11 19:15:45.619539] I [resource(slave
                  10.0.231.91/data/storage_a/storage):1166:service_loop]
                  GLUSTER: slave listening<br clear="none">
                  [2021-03-11 19:15:45.618999] I [resource(slave
                  10.0.231.93/data/storage_c/storage):1166:service_loop]
                  GLUSTER: slave listening<br clear="none">
                  [2021-03-11 19:15:45.620843] I [resource(slave
                  10.0.231.93/data/storage_a/storage):1139:connect]
                  GLUSTER: Mounted gluster volume [{duration=1.3361}]<br clear="none">
                  [2021-03-11 19:15:45.621347] I [resource(slave
                  10.0.231.92/data/storage_c/storage):1139:connect]
                  GLUSTER: Mounted gluster volume [{duration=1.3604}]<br clear="none">
                  [2021-03-11 19:15:45.622179] I [resource(slave
                  10.0.231.93/data/storage_a/storage):1166:service_loop]
                  GLUSTER: slave listening<br clear="none">
                  [2021-03-11 19:15:45.622541] I [resource(slave
                  10.0.231.92/data/storage_c/storage):1166:service_loop]
                  GLUSTER: slave listening<br clear="none">
                  [2021-03-11 19:15:47.626054] I [repce(slave
                  10.0.231.91/data/storage_a/storage):96:service_loop]
                  RepceServer: terminating on reaching EOF.<br clear="none">
                  [2021-03-11 19:15:48.778399] I [repce(slave
                  10.0.231.93/data/storage_c/storage):96:service_loop]
                  RepceServer: terminating on reaching EOF.<br clear="none">
                  [2021-03-11 19:15:48.778491] I [repce(slave
                  10.0.231.92/data/storage_c/storage):96:service_loop]
                  RepceServer: terminating on reaching EOF.<br clear="none">
                  [2021-03-11 19:15:48.796854] I [repce(slave
                  10.0.231.93/data/storage_a/storage):96:service_loop]
                  RepceServer: terminating on reaching EOF.<br clear="none">
                  [2021-03-11 19:15:48.800697] I [repce(slave
                  10.0.231.93/data/storage_b/storage):96:service_loop]
                  RepceServer: terminating on reaching EOF.<br clear="none">
                  <br clear="none">
                  The mnt-* geo-replication log files were also uninteresting: <br clear="none">
                  [2021-03-11 19:15:28.250150] I [MSGID: 100030]
                  [glusterfsd.c:2689:main] 0-/usr/sbin/glusterfs:
                  Started running version [{arg=/usr/sbin/glusterfs},
                  {version=8.3}, {cmdlinestr=/usr/sbin/glusterfs
                  --user-map-root=g<br clear="none">
                  eoaccount --aux-gfid-mount --acl --log-level=INFO
--log-file=/var/log/glusterfs/geo-replication-slaves/storage_10.0.231.81_pcic-backup/mnt-10.0.231.93-data-storage_b-storage.log
                  --volfile-server=localhost --volf<br clear="none">
                  ile-id=pcic-backup --client-pid=-1
                  /var/mountbroker-root/user1000/mtpt-geoaccount-GmVoUI}]
                  <br clear="none">
                  [2021-03-11 19:15:28.253485] I
                  [glusterfsd.c:2424:daemonize] 0-glusterfs: Pid of
                  current running process is 157484<br clear="none">
                  [2021-03-11 19:15:28.267911] I [MSGID: 101190]
                  [event-epoll.c:670:event_dispatch_epoll_worker]
                  0-epoll: Started thread with index [{index=0}]
                  <br clear="none">
                  [2021-03-11 19:15:28.267984] I [MSGID: 101190]
                  [event-epoll.c:670:event_dispatch_epoll_worker]
                  0-epoll: Started thread with index [{index=1}]
                  <br clear="none">
                  [2021-03-11 19:15:28.268371] I
                  [glusterfsd-mgmt.c:2170:mgmt_getspec_cbk] 0-glusterfs:
                  Received list of available volfile servers:
                  10.0.231.82:24007
                  <br clear="none">
                  [2021-03-11 19:15:28.271729] I [MSGID: 101190]
                  [event-epoll.c:670:event_dispatch_epoll_worker]
                  0-epoll: Started thread with index [{index=2}]
                  <br clear="none">
                  [2021-03-11 19:15:28.271762] I [MSGID: 101190]
                  [event-epoll.c:670:event_dispatch_epoll_worker]
                  0-epoll: Started thread with index [{index=3}]
                  <br clear="none">
                  [2021-03-11 19:15:28.272223] I [MSGID: 114020]
                  [client.c:2315:notify] 0-pcic-backup-client-0: parent
                  translators are ready, attempting connect on transport
                  []
                  <br clear="none">
                  [2021-03-11 19:15:28.275883] I [MSGID: 114020]
                  [client.c:2315:notify] 0-pcic-backup-client-1: parent
                  translators are ready, attempting connect on transport
                  []
                  <br clear="none">
                  [2021-03-11 19:15:28.276154] I
                  [rpc-clnt.c:1975:rpc_clnt_reconfig]
                  0-pcic-backup-client-0: changing port to 49153 (from
                  0)<br clear="none">
                  [2021-03-11 19:15:28.276193] I
                  [socket.c:849:__socket_shutdown]
                  0-pcic-backup-client-0: intentional socket
                  shutdown(13)<br clear="none">
                  Final graph:<br clear="none">
                  ...<br clear="none">
+------------------------------------------------------------------------------+<br clear="none">
                  [2021-03-11 19:15:28.282144] I
                  [socket.c:849:__socket_shutdown]
                  0-pcic-backup-client-1: intentional socket
                  shutdown(15)<br clear="none">
                  [2021-03-11 19:15:28.286536] I [MSGID: 114057]
                  [client-handshake.c:1128:select_server_supported_programs]
                  0-pcic-backup-client-0: Using Program
                  [{Program-name=GlusterFS 4.x v1}, {Num=1298437},
                  {Version=400}]
                  <br clear="none">
                  [2021-03-11 19:15:28.287208] I [MSGID: 114046]
                  [client-handshake.c:857:client_setvolume_cbk]
                  0-pcic-backup-client-0: Connected, attached to remote
                  volume [{conn-name=pcic-backup-client-0},
                  {remote_subvol=/data/brick}]
                  <br clear="none">
                  [2021-03-11 19:15:28.290162] I [MSGID: 114057]
                  [client-handshake.c:1128:select_server_supported_programs]
                  0-pcic-backup-client-1: Using Program
                  [{Program-name=GlusterFS 4.x v1}, {Num=1298437},
                  {Version=400}]
                  <br clear="none">
                  [2021-03-11 19:15:28.291122] I [MSGID: 114046]
                  [client-handshake.c:857:client_setvolume_cbk]
                  0-pcic-backup-client-1: Connected, attached to remote
                  volume [{conn-name=pcic-backup-client-1},
                  {remote_subvol=/data/brick}]
                  <br clear="none">
                  [2021-03-11 19:15:28.292703] I
                  [fuse-bridge.c:5300:fuse_init] 0-glusterfs-fuse: FUSE
                  inited with protocol versions: glusterfs 7.24 kernel
                  7.23<br clear="none">
                  [2021-03-11 19:15:28.292730] I
                  [fuse-bridge.c:5926:fuse_graph_sync] 0-fuse: switched
                  to graph 0<br clear="none">
                  [2021-03-11 19:15:32.809518] I
                  [fuse-bridge.c:6242:fuse_thread_proc] 0-fuse:
                  initiating unmount of
                  /var/mountbroker-root/user1000/mtpt-geoaccount-GmVoUI<br clear="none">
                  [2021-03-11 19:15:32.810216] W
                  [glusterfsd.c:1439:cleanup_and_exit]
                  (--&gt;/lib64/libpthread.so.0(+0x7ea5)
                  [0x7ff56b175ea5]
                  --&gt;/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5)
                  [0x55664e67db45]
                  --&gt;/usr/sbin/glusterfs(cleanup_and_exit+0x6b)
                  [0x55664e67d9ab] ) 0-: received signum (15), shutting
                  down <br clear="none">
                  [2021-03-11 19:15:32.810253] I
                  [fuse-bridge.c:7074:fini] 0-fuse: Unmounting
                  '/var/mountbroker-root/user1000/mtpt-geoaccount-GmVoUI'.<br clear="none">
                  [2021-03-11 19:15:32.810268] I
                  [fuse-bridge.c:7079:fini] 0-fuse: Closing fuse
                  connection to
                  '/var/mountbroker-root/user1000/mtpt-geoaccount-GmVoUI'.<br clear="none">
                  <br clear="none">
                  <br clear="none">
                  I'm really at a loss for where to go from here, it
                  seems like everything is set up correctly, and it has
                  been working well through the 7.x minor versions, but
                  the jump to 8 has broken something...<br clear="none">
                  <br clear="none">
                  There definitely are lots of changelogs on the servers
                  that fit into the timeframe... I haven't made any
                  writes to the source volume... do you think that's
                  the problem? That it needs some new changelog info to
                  sync?
                  <br clear="none">
                  I had been holding off making any writes in case I
                  needed to go back to Gluster 7.9 - not sure if that's
                  really a good option or not.
                  <br clear="none">
                  <br clear="none">
                  [root@storage01 changelogs]# for dirs in {a,b,c}; do
                  echo
                  "/data/storage_$dirs/storage/.glusterfs/changelogs";
                  ls -lh
                  /data/storage_$dirs/storage/.glusterfs/changelogs |
                  head; echo ""; done<br clear="none">
                  /data/storage_a/storage/.glusterfs/changelogs<br clear="none">
                  total 16G<br clear="none">
                  drw-------. 3 root root&nbsp;&nbsp; 24 Mar&nbsp; 9 11:34 2021<br clear="none">
                  -rw-r--r--. 1 root root&nbsp;&nbsp; 51 Mar 12 09:50 CHANGELOG<br clear="none">
                  -rw-r--r--. 1 root root&nbsp; 13K Aug 13&nbsp; 2020
                  CHANGELOG.1597343197<br clear="none">
                  -rw-r--r--. 1 root root&nbsp; 51K Aug 13&nbsp; 2020
                  CHANGELOG.1597343212<br clear="none">
                  -rw-r--r--. 1 root root&nbsp; 86K Aug 13&nbsp; 2020
                  CHANGELOG.1597343227<br clear="none">
                  -rw-r--r--. 1 root root&nbsp; 99K Aug 13&nbsp; 2020
                  CHANGELOG.1597343242<br clear="none">
                  -rw-r--r--. 1 root root&nbsp; 69K Aug 13&nbsp; 2020
                  CHANGELOG.1597343257<br clear="none">
                  -rw-r--r--. 1 root root&nbsp; 69K Aug 13&nbsp; 2020
                  CHANGELOG.1597343272<br clear="none">
                  -rw-r--r--. 1 root root&nbsp; 72K Aug 13&nbsp; 2020
                  CHANGELOG.1597343287<br clear="none">
                  <br clear="none">
                  /data/storage_b/storage/.glusterfs/changelogs<br clear="none">
                  total 3.3G<br clear="none">
                  drw-------. 3 root root&nbsp;&nbsp; 24 Mar&nbsp; 9 11:34 2021<br clear="none">
                  -rw-r--r--. 1 root root&nbsp;&nbsp; 51 Mar 12 09:50 CHANGELOG<br clear="none">
                  -rw-r--r--. 1 root root&nbsp; 13K Aug 13&nbsp; 2020
                  CHANGELOG.1597343197<br clear="none">
                  -rw-r--r--. 1 root root&nbsp; 53K Aug 13&nbsp; 2020
                  CHANGELOG.1597343212<br clear="none">
                  -rw-r--r--. 1 root root&nbsp; 89K Aug 13&nbsp; 2020
                  CHANGELOG.1597343227<br clear="none">
                  -rw-r--r--. 1 root root&nbsp; 89K Aug 13&nbsp; 2020
                  CHANGELOG.1597343242<br clear="none">
                  -rw-r--r--. 1 root root&nbsp; 69K Aug 13&nbsp; 2020
                  CHANGELOG.1597343257<br clear="none">
                  -rw-r--r--. 1 root root&nbsp; 71K Aug 13&nbsp; 2020
                  CHANGELOG.1597343272<br clear="none">
                  -rw-r--r--. 1 root root&nbsp; 86K Aug 13&nbsp; 2020
                  CHANGELOG.1597343287<br clear="none">
                  <br clear="none">
                  /data/storage_c/storage/.glusterfs/changelogs<br clear="none">
                  total 9.6G<br clear="none">
                  drw-------. 3 root root&nbsp;&nbsp; 16 Mar&nbsp; 9 11:34 2021<br clear="none">
                  -rw-r--r--. 1 root root&nbsp;&nbsp; 51 Mar 12 09:50 CHANGELOG<br clear="none">
                  -rw-r--r--. 1 root root&nbsp; 16K Aug 13&nbsp; 2020
                  CHANGELOG.1597343199<br clear="none">
                  -rw-r--r--. 1 root root&nbsp; 71K Aug 13&nbsp; 2020
                  CHANGELOG.1597343214<br clear="none">
                  -rw-r--r--. 1 root root 122K Aug 13&nbsp; 2020
                  CHANGELOG.1597343229<br clear="none">
                  -rw-r--r--. 1 root root&nbsp; 73K Aug 13&nbsp; 2020
                  CHANGELOG.1597343244<br clear="none">
                  -rw-r--r--. 1 root root 100K Aug 13&nbsp; 2020
                  CHANGELOG.1597343259<br clear="none">
                  -rw-r--r--. 1 root root&nbsp; 95K Aug 13&nbsp; 2020
                  CHANGELOG.1597343274<br clear="none">
                  -rw-r--r--. 1 root root&nbsp; 92K Aug 13&nbsp; 2020
                  CHANGELOG.1597343289<br clear="none">
                  <br clear="none">
                  [root@storage01 changelogs]# for dirs in {a,b,c}; do
                  echo
                  "/data/storage_$dirs/storage/.glusterfs/changelogs";
                  ls -lh
                  /data/storage_$dirs/storage/.glusterfs/changelogs |
                  tail; echo ""; done<br clear="none">
                  /data/storage_a/storage/.glusterfs/changelogs<br clear="none">
                  -rw-r--r--. 1 root root&nbsp;&nbsp; 92 Mar&nbsp; 1 21:33
                  CHANGELOG.1614663193<br clear="none">
                  -rw-r--r--. 1 root root&nbsp;&nbsp; 92 Mar&nbsp; 1 21:42
                  CHANGELOG.1614663731<br clear="none">
                  -rw-r--r--. 1 root root&nbsp;&nbsp; 92 Mar&nbsp; 1 21:42
                  CHANGELOG.1614663760<br clear="none">
                  -rw-r--r--. 1 root root&nbsp; 511 Mar&nbsp; 1 21:47
                  CHANGELOG.1614664043<br clear="none">
                  -rw-r--r--. 1 root root&nbsp; 536 Mar&nbsp; 1 21:48
                  CHANGELOG.1614664101<br clear="none">
                  -rw-r--r--. 1 root root 2.8K Mar&nbsp; 1 21:48
                  CHANGELOG.1614664116<br clear="none">
                  -rw-r--r--. 1 root root&nbsp;&nbsp; 92 Mar&nbsp; 1 22:20
                  CHANGELOG.1614666061<br clear="none">
                  -rw-r--r--. 1 root root&nbsp;&nbsp; 92 Mar&nbsp; 1 22:29
                  CHANGELOG.1614666554<br clear="none">
                  drw-------. 2 root root&nbsp;&nbsp; 10 May&nbsp; 7&nbsp; 2020 csnap<br clear="none">
                  drw-------. 2 root root&nbsp;&nbsp; 38 Aug 13&nbsp; 2020 htime<br clear="none">
                  <br clear="none">
                  /data/storage_b/storage/.glusterfs/changelogs<br clear="none">
                  -rw-r--r--. 1 root root&nbsp;&nbsp; 92 Mar&nbsp; 1 21:42
                  CHANGELOG.1614663731<br clear="none">
                  -rw-r--r--. 1 root root&nbsp; 480 Mar&nbsp; 1 21:42
                  CHANGELOG.1614663745<br clear="none">
                  -rw-r--r--. 1 root root&nbsp;&nbsp; 92 Mar&nbsp; 1 21:42
                  CHANGELOG.1614663760<br clear="none">
                  -rw-r--r--. 1 root root&nbsp; 524 Mar&nbsp; 1 21:47
                  CHANGELOG.1614664043<br clear="none">
                  -rw-r--r--. 1 root root&nbsp; 495 Mar&nbsp; 1 21:48
                  CHANGELOG.1614664100<br clear="none">
                  -rw-r--r--. 1 root root 1.6K Mar&nbsp; 1 21:48
                  CHANGELOG.1614664114<br clear="none">
                  -rw-r--r--. 1 root root&nbsp;&nbsp; 92 Mar&nbsp; 1 22:20
                  CHANGELOG.1614666060<br clear="none">
                  -rw-r--r--. 1 root root&nbsp;&nbsp; 92 Mar&nbsp; 1 22:29
                  CHANGELOG.1614666553<br clear="none">
                  drw-------. 2 root root&nbsp;&nbsp; 10 May&nbsp; 7&nbsp; 2020 csnap<br clear="none">
                  drw-------. 2 root root&nbsp;&nbsp; 38 Aug 13&nbsp; 2020 htime<br clear="none">
                  <br clear="none">
                  /data/storage_c/storage/.glusterfs/changelogs<br clear="none">
                  -rw-r--r--. 1 root root&nbsp;&nbsp; 92 Mar&nbsp; 1 21:42
                  CHANGELOG.1614663738<br clear="none">
                  -rw-r--r--. 1 root root&nbsp;&nbsp; 92 Mar&nbsp; 1 21:42
                  CHANGELOG.1614663753<br clear="none">
                  -rw-r--r--. 1 root root&nbsp; 395 Mar&nbsp; 1 21:47
                  CHANGELOG.1614664051<br clear="none">
                  -rw-r--r--. 1 root root&nbsp; 316 Mar&nbsp; 1 21:48
                  CHANGELOG.1614664094<br clear="none">
                  -rw-r--r--. 1 root root 1.2K Mar&nbsp; 1 21:48
                  CHANGELOG.1614664109<br clear="none">
                  -rw-r--r--. 1 root root&nbsp; 174 Mar&nbsp; 1 21:48
                  CHANGELOG.1614664123<br clear="none">
                  -rw-r--r--. 1 root root&nbsp;&nbsp; 92 Mar&nbsp; 1 22:20
                  CHANGELOG.1614666061<br clear="none">
                  -rw-r--r--. 1 root root&nbsp;&nbsp; 92 Mar&nbsp; 1 22:29
                  CHANGELOG.1614666553<br clear="none">
                  drw-------. 2 root root&nbsp;&nbsp;&nbsp; 6 May&nbsp; 7&nbsp; 2020 csnap<br clear="none">
                  drw-------. 2 root root&nbsp;&nbsp; 30 Aug 13&nbsp; 2020 htime<br clear="none">
                  <br clear="none">
                  [root@storage02 ~]# for dirs in {a,b,c}; do echo
                  "/data/storage_$dirs/storage/.glusterfs/changelogs";
                  ls -lh
                  /data/storage_$dirs/storage/.glusterfs/changelogs |
                  head; echo ""; done<br clear="none">
                  /data/storage_a/storage/.glusterfs/changelogs<br clear="none">
                  total 9.6G<br clear="none">
                  drw-------. 3 root root&nbsp;&nbsp; 24 Mar&nbsp; 9 11:34 2021<br clear="none">
                  -rw-r--r--. 1 root root&nbsp;&nbsp; 51 Mar 12 09:50 CHANGELOG<br clear="none">
                  -rw-r--r--. 1 root root 4.2K Aug 13&nbsp; 2020
                  CHANGELOG.1597343193<br clear="none">
                  -rw-r--r--. 1 root root&nbsp; 32K Aug 13&nbsp; 2020
                  CHANGELOG.1597343208<br clear="none">
                  -rw-r--r--. 1 root root 107K Aug 13&nbsp; 2020
                  CHANGELOG.1597343223<br clear="none">
                  -rw-r--r--. 1 root root 120K Aug 13&nbsp; 2020
                  CHANGELOG.1597343238<br clear="none">
                  -rw-r--r--. 1 root root&nbsp; 72K Aug 13&nbsp; 2020
                  CHANGELOG.1597343253<br clear="none">
                  -rw-r--r--. 1 root root 111K Aug 13&nbsp; 2020
                  CHANGELOG.1597343268<br clear="none">
                  -rw-r--r--. 1 root root&nbsp; 91K Aug 13&nbsp; 2020
                  CHANGELOG.1597343283<br clear="none">
                  <br clear="none">
                  /data/storage_b/storage/.glusterfs/changelogs<br clear="none">
                  total 16G<br clear="none">
                  drw-------. 3 root root&nbsp;&nbsp; 24 Mar&nbsp; 9 11:34 2021<br clear="none">
                  -rw-r--r--. 1 root root&nbsp;&nbsp; 51 Mar 12 09:50 CHANGELOG<br clear="none">
-rw-r--r--. 1 root root 3.9K Aug 13&nbsp; 2020 CHANGELOG.1597343193<br clear="none">
-rw-r--r--. 1 root root&nbsp; 35K Aug 13&nbsp; 2020 CHANGELOG.1597343208<br clear="none">
-rw-r--r--. 1 root root&nbsp; 85K Aug 13&nbsp; 2020 CHANGELOG.1597343223<br clear="none">
-rw-r--r--. 1 root root 103K Aug 13&nbsp; 2020 CHANGELOG.1597343238<br clear="none">
-rw-r--r--. 1 root root&nbsp; 70K Aug 13&nbsp; 2020 CHANGELOG.1597343253<br clear="none">
-rw-r--r--. 1 root root&nbsp; 72K Aug 13&nbsp; 2020 CHANGELOG.1597343268<br clear="none">
-rw-r--r--. 1 root root&nbsp; 73K Aug 13&nbsp; 2020 CHANGELOG.1597343283<br clear="none">
                  <br clear="none">
                  /data/storage_c/storage/.glusterfs/changelogs<br clear="none">
                  total 3.3G<br clear="none">
                  drw-------. 3 root root&nbsp;&nbsp; 16 Mar&nbsp; 9 11:34 2021<br clear="none">
                  -rw-r--r--. 1 root root&nbsp;&nbsp; 51 Mar 12 09:51 CHANGELOG<br clear="none">
-rw-r--r--. 1 root root&nbsp; 21K Aug 13&nbsp; 2020 CHANGELOG.1597343202<br clear="none">
-rw-r--r--. 1 root root&nbsp; 75K Aug 13&nbsp; 2020 CHANGELOG.1597343217<br clear="none">
-rw-r--r--. 1 root root&nbsp; 92K Aug 13&nbsp; 2020 CHANGELOG.1597343232<br clear="none">
-rw-r--r--. 1 root root&nbsp; 77K Aug 13&nbsp; 2020 CHANGELOG.1597343247<br clear="none">
-rw-r--r--. 1 root root&nbsp; 66K Aug 13&nbsp; 2020 CHANGELOG.1597343262<br clear="none">
-rw-r--r--. 1 root root&nbsp; 84K Aug 13&nbsp; 2020 CHANGELOG.1597343277<br clear="none">
-rw-r--r--. 1 root root&nbsp; 81K Aug 13&nbsp; 2020 CHANGELOG.1597343292<br clear="none">
                  <br clear="none">
[root@storage02 ~]# for dirs in {a,b,c}; do echo "/data/storage_$dirs/storage/.glusterfs/changelogs"; ls -lh /data/storage_$dirs/storage/.glusterfs/changelogs | tail; echo ""; done<br clear="none">
                  /data/storage_a/storage/.glusterfs/changelogs<br clear="none">
-rw-r--r--. 1 root root&nbsp;&nbsp; 92 Mar&nbsp; 1 21:42 CHANGELOG.1614663734<br clear="none">
-rw-r--r--. 1 root root&nbsp;&nbsp; 92 Mar&nbsp; 1 21:42 CHANGELOG.1614663749<br clear="none">
-rw-r--r--. 1 root root&nbsp; 395 Mar&nbsp; 1 21:47 CHANGELOG.1614664052<br clear="none">
-rw-r--r--. 1 root root&nbsp; 316 Mar&nbsp; 1 21:48 CHANGELOG.1614664096<br clear="none">
-rw-r--r--. 1 root root 1.2K Mar&nbsp; 1 21:48 CHANGELOG.1614664111<br clear="none">
-rw-r--r--. 1 root root&nbsp; 174 Mar&nbsp; 1 21:48 CHANGELOG.1614664126<br clear="none">
-rw-r--r--. 1 root root&nbsp;&nbsp; 92 Mar&nbsp; 1 22:20 CHANGELOG.1614666056<br clear="none">
-rw-r--r--. 1 root root&nbsp;&nbsp; 92 Mar&nbsp; 1 22:29 CHANGELOG.1614666560<br clear="none">
                  drw-------. 2 root root&nbsp;&nbsp; 10 May&nbsp; 7&nbsp; 2020 csnap<br clear="none">
                  drw-------. 2 root root&nbsp;&nbsp; 38 Aug 13&nbsp; 2020 htime<br clear="none">
                  <br clear="none">
                  /data/storage_b/storage/.glusterfs/changelogs<br clear="none">
-rw-r--r--. 1 root root&nbsp;&nbsp; 92 Mar&nbsp; 1 21:42 CHANGELOG.1614663735<br clear="none">
-rw-r--r--. 1 root root&nbsp;&nbsp; 92 Mar&nbsp; 1 21:42 CHANGELOG.1614663749<br clear="none">
-rw-r--r--. 1 root root&nbsp; 511 Mar&nbsp; 1 21:47 CHANGELOG.1614664052<br clear="none">
-rw-r--r--. 1 root root&nbsp; 316 Mar&nbsp; 1 21:48 CHANGELOG.1614664096<br clear="none">
-rw-r--r--. 1 root root 1.8K Mar&nbsp; 1 21:48 CHANGELOG.1614664111<br clear="none">
-rw-r--r--. 1 root root 1.4K Mar&nbsp; 1 21:48 CHANGELOG.1614664126<br clear="none">
-rw-r--r--. 1 root root&nbsp;&nbsp; 92 Mar&nbsp; 1 22:20 CHANGELOG.1614666060<br clear="none">
-rw-r--r--. 1 root root&nbsp;&nbsp; 92 Mar&nbsp; 1 22:29 CHANGELOG.1614666556<br clear="none">
                  drw-------. 2 root root&nbsp;&nbsp; 10 May&nbsp; 7&nbsp; 2020 csnap<br clear="none">
                  drw-------. 2 root root&nbsp;&nbsp; 38 Aug 13&nbsp; 2020 htime<br clear="none">
                  <br clear="none">
                  /data/storage_c/storage/.glusterfs/changelogs<br clear="none">
-rw-r--r--. 1 root root&nbsp;&nbsp; 92 Mar&nbsp; 1 21:42 CHANGELOG.1614663738<br clear="none">
-rw-r--r--. 1 root root&nbsp; 521 Mar&nbsp; 1 21:42 CHANGELOG.1614663752<br clear="none">
-rw-r--r--. 1 root root&nbsp; 524 Mar&nbsp; 1 21:47 CHANGELOG.1614664042<br clear="none">
-rw-r--r--. 1 root root&nbsp;&nbsp; 92 Mar&nbsp; 1 21:47 CHANGELOG.1614664057<br clear="none">
-rw-r--r--. 1 root root&nbsp; 536 Mar&nbsp; 1 21:48 CHANGELOG.1614664102<br clear="none">
-rw-r--r--. 1 root root 1.6K Mar&nbsp; 1 21:48 CHANGELOG.1614664117<br clear="none">
-rw-r--r--. 1 root root&nbsp;&nbsp; 92 Mar&nbsp; 1 22:20 CHANGELOG.1614666057<br clear="none">
-rw-r--r--. 1 root root&nbsp;&nbsp; 92 Mar&nbsp; 1 22:29 CHANGELOG.1614666550<br clear="none">
                  drw-------. 2 root root&nbsp;&nbsp;&nbsp; 6 May&nbsp; 7&nbsp; 2020 csnap<br clear="none">
                  drw-------. 2 root root&nbsp;&nbsp; 30 Aug 13&nbsp; 2020 htime<br clear="none">
                  <br clear="none">
                  <br clear="none">
[root@storage03 ~]# for dirs in {a,b,c}; do echo "/data/storage_$dirs/storage/.glusterfs/changelogs"; ls -lh /data/storage_$dirs/storage/.glusterfs/changelogs | head; echo ""; done<br clear="none">
                  /data/storage_a/storage/.glusterfs/changelogs<br clear="none">
                  total 3.4G<br clear="none">
                  drw-------. 3 root root&nbsp;&nbsp; 24 Mar&nbsp; 9 11:34 2021<br clear="none">
                  -rw-r--r--. 1 root root&nbsp;&nbsp; 51 Mar 12 09:50 CHANGELOG<br clear="none">
-rw-r--r--. 1 root root&nbsp; 19K Aug 13&nbsp; 2020 CHANGELOG.1597343201<br clear="none">
-rw-r--r--. 1 root root&nbsp; 66K Aug 13&nbsp; 2020 CHANGELOG.1597343215<br clear="none">
-rw-r--r--. 1 root root&nbsp; 91K Aug 13&nbsp; 2020 CHANGELOG.1597343230<br clear="none">
-rw-r--r--. 1 root root&nbsp; 82K Aug 13&nbsp; 2020 CHANGELOG.1597343245<br clear="none">
-rw-r--r--. 1 root root&nbsp; 64K Aug 13&nbsp; 2020 CHANGELOG.1597343259<br clear="none">
-rw-r--r--. 1 root root&nbsp; 75K Aug 13&nbsp; 2020 CHANGELOG.1597343274<br clear="none">
-rw-r--r--. 1 root root&nbsp; 81K Aug 13&nbsp; 2020 CHANGELOG.1597343289<br clear="none">
                  <br clear="none">
                  /data/storage_b/storage/.glusterfs/changelogs<br clear="none">
                  total 9.6G<br clear="none">
                  drw-------. 3 root root&nbsp;&nbsp; 24 Mar&nbsp; 9 11:34 2021<br clear="none">
                  -rw-r--r--. 1 root root&nbsp;&nbsp; 51 Mar 12 09:51 CHANGELOG<br clear="none">
-rw-r--r--. 1 root root&nbsp; 19K Aug 13&nbsp; 2020 CHANGELOG.1597343201<br clear="none">
-rw-r--r--. 1 root root&nbsp; 80K Aug 13&nbsp; 2020 CHANGELOG.1597343215<br clear="none">
-rw-r--r--. 1 root root 119K Aug 13&nbsp; 2020 CHANGELOG.1597343230<br clear="none">
-rw-r--r--. 1 root root&nbsp; 65K Aug 13&nbsp; 2020 CHANGELOG.1597343244<br clear="none">
-rw-r--r--. 1 root root 100K Aug 13&nbsp; 2020 CHANGELOG.1597343259<br clear="none">
-rw-r--r--. 1 root root&nbsp; 95K Aug 13&nbsp; 2020 CHANGELOG.1597343274<br clear="none">
-rw-r--r--. 1 root root&nbsp; 92K Aug 13&nbsp; 2020 CHANGELOG.1597343289<br clear="none">
                  <br clear="none">
                  /data/storage_c/storage/.glusterfs/changelogs<br clear="none">
                  total 16G<br clear="none">
                  drw-------. 3 root root&nbsp;&nbsp; 16 Mar&nbsp; 9 11:34 2021<br clear="none">
                  -rw-r--r--. 1 root root&nbsp;&nbsp; 51 Mar 12 09:51 CHANGELOG<br clear="none">
-rw-r--r--. 1 root root 3.9K Aug 13&nbsp; 2020 CHANGELOG.1597343193<br clear="none">
-rw-r--r--. 1 root root&nbsp; 35K Aug 13&nbsp; 2020 CHANGELOG.1597343208<br clear="none">
-rw-r--r--. 1 root root&nbsp; 85K Aug 13&nbsp; 2020 CHANGELOG.1597343223<br clear="none">
-rw-r--r--. 1 root root 103K Aug 13&nbsp; 2020 CHANGELOG.1597343238<br clear="none">
-rw-r--r--. 1 root root&nbsp; 70K Aug 13&nbsp; 2020 CHANGELOG.1597343253<br clear="none">
-rw-r--r--. 1 root root&nbsp; 71K Aug 13&nbsp; 2020 CHANGELOG.1597343268<br clear="none">
-rw-r--r--. 1 root root&nbsp; 73K Aug 13&nbsp; 2020 CHANGELOG.1597343283<br clear="none">
                  <br clear="none">
[root@storage03 ~]# for dirs in {a,b,c}; do echo "/data/storage_$dirs/storage/.glusterfs/changelogs"; ls -lh /data/storage_$dirs/storage/.glusterfs/changelogs | tail; echo ""; done<br clear="none">
                  /data/storage_a/storage/.glusterfs/changelogs<br clear="none">
-rw-r--r--. 1 root root&nbsp;&nbsp; 92 Mar&nbsp; 1 21:33 CHANGELOG.1614663183<br clear="none">
-rw-r--r--. 1 root root&nbsp;&nbsp; 92 Mar&nbsp; 1 21:42 CHANGELOG.1614663740<br clear="none">
-rw-r--r--. 1 root root&nbsp; 521 Mar&nbsp; 1 21:42 CHANGELOG.1614663755<br clear="none">
-rw-r--r--. 1 root root&nbsp; 524 Mar&nbsp; 1 21:47 CHANGELOG.1614664049<br clear="none">
-rw-r--r--. 1 root root 1.9K Mar&nbsp; 1 21:48 CHANGELOG.1614664106<br clear="none">
-rw-r--r--. 1 root root&nbsp; 174 Mar&nbsp; 1 21:48 CHANGELOG.1614664121<br clear="none">
-rw-r--r--. 1 root root&nbsp;&nbsp; 92 Mar&nbsp; 1 22:20 CHANGELOG.1614666051<br clear="none">
-rw-r--r--. 1 root root&nbsp;&nbsp; 92 Mar&nbsp; 1 22:29 CHANGELOG.1614666559<br clear="none">
                  drw-------. 2 root root&nbsp;&nbsp; 10 May&nbsp; 7&nbsp; 2020 csnap<br clear="none">
                  drw-------. 2 root root&nbsp;&nbsp; 38 Aug 13&nbsp; 2020 htime<br clear="none">
                  <br clear="none">
                  /data/storage_b/storage/.glusterfs/changelogs<br clear="none">
-rw-r--r--. 1 root root&nbsp; 474 Mar&nbsp; 1 21:33 CHANGELOG.1614663182<br clear="none">
-rw-r--r--. 1 root root&nbsp;&nbsp; 92 Mar&nbsp; 1 21:42 CHANGELOG.1614663739<br clear="none">
-rw-r--r--. 1 root root&nbsp;&nbsp; 92 Mar&nbsp; 1 21:42 CHANGELOG.1614663753<br clear="none">
-rw-r--r--. 1 root root&nbsp; 395 Mar&nbsp; 1 21:47 CHANGELOG.1614664049<br clear="none">
-rw-r--r--. 1 root root 1.4K Mar&nbsp; 1 21:48 CHANGELOG.1614664106<br clear="none">
-rw-r--r--. 1 root root&nbsp; 174 Mar&nbsp; 1 21:48 CHANGELOG.1614664120<br clear="none">
-rw-r--r--. 1 root root&nbsp;&nbsp; 92 Mar&nbsp; 1 22:20 CHANGELOG.1614666063<br clear="none">
-rw-r--r--. 1 root root&nbsp;&nbsp; 92 Mar&nbsp; 1 22:29 CHANGELOG.1614666557<br clear="none">
                  drw-------. 2 root root&nbsp;&nbsp; 10 May&nbsp; 7&nbsp; 2020 csnap<br clear="none">
                  drw-------. 2 root root&nbsp;&nbsp; 38 Aug 13&nbsp; 2020 htime<br clear="none">
                  <br clear="none">
                  /data/storage_c/storage/.glusterfs/changelogs<br clear="none">
-rw-r--r--. 1 root root&nbsp; 468 Mar&nbsp; 1 21:33 CHANGELOG.1614663183<br clear="none">
-rw-r--r--. 1 root root&nbsp;&nbsp; 92 Mar&nbsp; 1 21:42 CHANGELOG.1614663740<br clear="none">
-rw-r--r--. 1 root root&nbsp;&nbsp; 92 Mar&nbsp; 1 21:42 CHANGELOG.1614663754<br clear="none">
-rw-r--r--. 1 root root&nbsp; 511 Mar&nbsp; 1 21:47 CHANGELOG.1614664048<br clear="none">
-rw-r--r--. 1 root root 2.0K Mar&nbsp; 1 21:48 CHANGELOG.1614664105<br clear="none">
-rw-r--r--. 1 root root 1.4K Mar&nbsp; 1 21:48 CHANGELOG.1614664120<br clear="none">
-rw-r--r--. 1 root root&nbsp;&nbsp; 92 Mar&nbsp; 1 22:20 CHANGELOG.1614666063<br clear="none">
-rw-r--r--. 1 root root&nbsp;&nbsp; 92 Mar&nbsp; 1 22:29 CHANGELOG.1614666556<br clear="none">
                  drw-------. 2 root root&nbsp;&nbsp;&nbsp; 6 May&nbsp; 7&nbsp; 2020 csnap<br clear="none">
                  drw-------. 2 root root&nbsp;&nbsp; 30 Aug 13&nbsp; 2020 htime<br clear="none">
                  <br clear="none">
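A side note for anyone comparing these listings: the numeric suffix on the CHANGELOG.&lt;ts&gt; files is a Unix epoch timestamp, so the newest suffix on each brick tells you when changelog rolling stopped. A minimal sketch for decoding one (the helper name is made up here, not part of gluster's tooling):

```shell
# Decode the Unix-epoch suffix of a CHANGELOG.<ts> file name.
# (Illustrative helper only, not part of gluster's tooling.)
changelog_date() {
    ts=${1##*.}                          # strip everything up to the last dot
    date -u -d "@$ts" '+%Y-%m-%d %H:%M UTC'
}

changelog_date CHANGELOG.1614666556      # → 2021-03-02 06:29 UTC
```

1614666556 is the newest suffix in the listings above; 06:29 UTC matches the Mar 1 22:29 (Pacific) mtime shown there, i.e. every brick last recorded a changelog at the same moment.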
                  Thanks,<br clear="none">
                  &nbsp;-Matthew<br clear="none">
                  <div class="yiv5774932813moz-signature">
                    <p>--<br clear="none">
                      Matthew Benstead<br clear="none">
                      System Administrator<br clear="none">
                      <a rel="nofollow noopener noreferrer" shape="rect" target="_blank" href="https://pacificclimate.org/">Pacific Climate Impacts
                        Consortium</a><br clear="none">
                      University of Victoria, UH1<br clear="none">
                      PO Box 1800, STN CSC<br clear="none">
                      Victoria, BC, V8W 2Y2<br clear="none">
                      Phone: +1-250-721-8432<br clear="none">
                      Email: <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:matthewb@uvic.ca" target="_blank" href="mailto:matthewb@uvic.ca">
                        matthewb@uvic.ca</a></p>
                  </div>
                  <div class="yiv5774932813yqt2843030402" id="yiv5774932813yqt16014">
                    <div class="yiv5774932813moz-cite-prefix">On 3/11/21
                      11:37 PM, Strahil Nikolov wrote:<br clear="none">
                    </div>
                    <blockquote type="cite">
                      <br clear="none">
                      <div>Have you checked the secondary volume nodes'
                        logs &amp; SELINUX status ?
                        <div id="yiv5774932813yMail_cursorElementTracker_1615534613030"><br clear="none">
                        </div>
                        <div id="yiv5774932813yMail_cursorElementTracker_1615534613170">Best
                          Regards,</div>
                        <div id="yiv5774932813yMail_cursorElementTracker_1615534618885">Strahil
                          Nikolov<br clear="none">
                          <div id="yiv5774932813yMail_cursorElementTracker_1615534604587"><br clear="none">
                            <blockquote>
                              <div>
                                <div>On Thu, Mar 11, 2021 at 21:36,
                                  Matthew Benstead</div>
                                <div><a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-rfc2396E" ymailto="mailto:matthewb@uvic.ca" target="_blank" href="mailto:matthewb@uvic.ca">&lt;matthewb@uvic.ca&gt;</a>
                                  wrote:</div>
                              </div>
                              <div>
                                <div id="yiv5774932813">
                                  <div>Hi Strahil, <br clear="none">
                                    <br clear="none">
It looks like the relevant options are
changelog_log_level and log_level;
I've set them to DEBUG:
                                    <br clear="none">
                                    <br clear="none">
                                    [root@storage01 ~]# gluster volume
                                    geo-replication storage <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:geoaccount@10.0.231.81::pcic-backup">
geoaccount@10.0.231.81::pcic-backup</a> config | egrep -i "log_level"<br clear="none">
                                    changelog_log_level:INFO<br clear="none">
                                    cli_log_level:INFO<br clear="none">
                                    gluster_log_level:INFO<br clear="none">
                                    log_level:INFO<br clear="none">
                                    slave_gluster_log_level:INFO<br clear="none">
                                    slave_log_level:INFO<br clear="none">
                                    <br clear="none">
                                    [root@storage01 ~]# gluster volume
                                    geo-replication storage <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:geoaccount@10.0.231.81::pcic-backup">
geoaccount@10.0.231.81::pcic-backup</a> config changelog_log_level DEBUG<br clear="none">
                                    geo-replication config updated
                                    successfully<br clear="none">
                                    <br clear="none">
                                    [root@storage01 ~]# gluster volume
                                    geo-replication storage <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:geoaccount@10.0.231.81::pcic-backup">
geoaccount@10.0.231.81::pcic-backup</a> config log_level DEBUG<br clear="none">
                                    geo-replication config updated
                                    successfully<br clear="none">
                                    <br clear="none">
                                    <br clear="none">
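The other four *_log_level options from the config listing can be raised the same way; a small dry-run sketch that loops the same config command over all six (printed only; remove the echo to actually apply them):

```shell
# Dry run: print one "config <option> DEBUG" command per log-level
# option shown in the config listing above. The session string is the
# one used throughout this thread; remove the inner "echo" to apply.
session="storage geoaccount@10.0.231.81::pcic-backup"
cmds=$(for opt in changelog_log_level cli_log_level gluster_log_level \
                  log_level slave_gluster_log_level slave_log_level; do
           echo "gluster volume geo-replication $session config $opt DEBUG"
       done)
echo "$cmds"
```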
                                    Then I restarted geo-replication: <br clear="none">
                                    <br clear="none">
                                    [root@storage01 ~]# gluster volume
                                    geo-replication storage <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:geoaccount@10.0.231.81::pcic-backup">
geoaccount@10.0.231.81::pcic-backup</a> start<br clear="none">
                                    Starting geo-replication session
                                    between storage &amp; <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:geoaccount@10.0.231.81::pcic-backup">
geoaccount@10.0.231.81::pcic-backup</a> has been successful<br clear="none">
                                    [root@storage01 ~]# gluster volume
                                    geo-replication status <br clear="none">
                                    &nbsp;<br clear="none">
MASTER NODE&nbsp;&nbsp;&nbsp; MASTER VOL&nbsp;&nbsp;&nbsp; MASTER BRICK&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; SLAVE USER&nbsp;&nbsp;&nbsp; SLAVE&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; SLAVE NODE&nbsp;&nbsp;&nbsp; STATUS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; CRAWL STATUS&nbsp;&nbsp;&nbsp; LAST_SYNCED&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
                                    <br clear="none">
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------<br clear="none">
10.0.231.91&nbsp;&nbsp;&nbsp; storage&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; /data/storage_a/storage&nbsp;&nbsp;&nbsp; geoaccount&nbsp;&nbsp;&nbsp; <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:ssh://geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:ssh://geoaccount@10.0.231.81::pcic-backup">ssh://geoaccount@10.0.231.81::pcic-backup</a>&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Initializing...&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<br clear="none">
10.0.231.91&nbsp;&nbsp;&nbsp; storage&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; /data/storage_c/storage&nbsp;&nbsp;&nbsp; geoaccount&nbsp;&nbsp;&nbsp; <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:ssh://geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:ssh://geoaccount@10.0.231.81::pcic-backup">ssh://geoaccount@10.0.231.81::pcic-backup</a>&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Initializing...&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<br clear="none">
10.0.231.91&nbsp;&nbsp;&nbsp; storage&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; /data/storage_b/storage&nbsp;&nbsp;&nbsp; geoaccount&nbsp;&nbsp;&nbsp; <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:ssh://geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:ssh://geoaccount@10.0.231.81::pcic-backup">ssh://geoaccount@10.0.231.81::pcic-backup</a>&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Initializing...&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<br clear="none">
10.0.231.92&nbsp;&nbsp;&nbsp; storage&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; /data/storage_b/storage&nbsp;&nbsp;&nbsp; geoaccount&nbsp;&nbsp;&nbsp; <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:ssh://geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:ssh://geoaccount@10.0.231.81::pcic-backup">ssh://geoaccount@10.0.231.81::pcic-backup</a>&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Initializing...&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<br clear="none">
10.0.231.92&nbsp;&nbsp;&nbsp; storage&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; /data/storage_a/storage&nbsp;&nbsp;&nbsp; geoaccount&nbsp;&nbsp;&nbsp; <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:ssh://geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:ssh://geoaccount@10.0.231.81::pcic-backup">ssh://geoaccount@10.0.231.81::pcic-backup</a>&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Initializing...&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<br clear="none">
10.0.231.92&nbsp;&nbsp;&nbsp; storage&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; /data/storage_c/storage&nbsp;&nbsp;&nbsp; geoaccount&nbsp;&nbsp;&nbsp; <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:ssh://geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:ssh://geoaccount@10.0.231.81::pcic-backup">ssh://geoaccount@10.0.231.81::pcic-backup</a>&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Initializing...&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<br clear="none">
10.0.231.93&nbsp;&nbsp;&nbsp; storage&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; /data/storage_c/storage&nbsp;&nbsp;&nbsp; geoaccount&nbsp;&nbsp;&nbsp; <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:ssh://geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:ssh://geoaccount@10.0.231.81::pcic-backup">ssh://geoaccount@10.0.231.81::pcic-backup</a>&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Initializing...&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<br clear="none">
10.0.231.93&nbsp;&nbsp;&nbsp; storage&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; /data/storage_b/storage&nbsp;&nbsp;&nbsp; geoaccount&nbsp;&nbsp;&nbsp; <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:ssh://geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:ssh://geoaccount@10.0.231.81::pcic-backup">ssh://geoaccount@10.0.231.81::pcic-backup</a>&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Initializing...&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<br clear="none">
10.0.231.93&nbsp;&nbsp;&nbsp; storage&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; /data/storage_a/storage&nbsp;&nbsp;&nbsp; geoaccount&nbsp;&nbsp;&nbsp; <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:ssh://geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:ssh://geoaccount@10.0.231.81::pcic-backup">ssh://geoaccount@10.0.231.81::pcic-backup</a>&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Initializing...&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
                                    <br clear="none">
                                    [root@storage01 ~]# gluster volume
                                    geo-replication status <br clear="none">
                                    &nbsp;<br clear="none">
MASTER NODE&nbsp;&nbsp;&nbsp; MASTER VOL&nbsp;&nbsp;&nbsp; MASTER BRICK&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; SLAVE USER&nbsp;&nbsp;&nbsp; SLAVE&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; SLAVE NODE&nbsp;&nbsp;&nbsp; STATUS&nbsp;&nbsp;&nbsp; CRAWL STATUS&nbsp;&nbsp;&nbsp; LAST_SYNCED&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
                                    <br clear="none">
---------------------------------------------------------------------------------------------------------------------------------------------------------------------<br clear="none">
10.0.231.91&nbsp;&nbsp;&nbsp; storage&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; /data/storage_a/storage&nbsp;&nbsp;&nbsp; geoaccount&nbsp;&nbsp;&nbsp; <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:ssh://geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:ssh://geoaccount@10.0.231.81::pcic-backup">ssh://geoaccount@10.0.231.81::pcic-backup</a>&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Faulty&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
                                    <br clear="none">
                                    10.0.231.91&nbsp;&nbsp;&nbsp; storage&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
                                    /data/storage_c/storage&nbsp;&nbsp;&nbsp;
                                    geoaccount&nbsp;&nbsp;&nbsp; <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:ssh://geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:ssh://geoaccount@10.0.231.81::pcic-backup">
ssh://geoaccount@10.0.231.81::pcic-backup</a>&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Faulty&nbsp;&nbsp;&nbsp;
                                    N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
                                    <br clear="none">
                                    10.0.231.91&nbsp;&nbsp;&nbsp; storage&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
                                    /data/storage_b/storage&nbsp;&nbsp;&nbsp;
                                    geoaccount&nbsp;&nbsp;&nbsp; <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:ssh://geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:ssh://geoaccount@10.0.231.81::pcic-backup">
ssh://geoaccount@10.0.231.81::pcic-backup</a>&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Faulty&nbsp;&nbsp;&nbsp;
                                    N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
                                    <br clear="none">
                                    10.0.231.92&nbsp;&nbsp;&nbsp; storage&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
                                    /data/storage_b/storage&nbsp;&nbsp;&nbsp;
                                    geoaccount&nbsp;&nbsp;&nbsp; <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:ssh://geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:ssh://geoaccount@10.0.231.81::pcic-backup">
ssh://geoaccount@10.0.231.81::pcic-backup</a>&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Faulty&nbsp;&nbsp;&nbsp;
                                    N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
                                    <br clear="none">
                                    10.0.231.92&nbsp;&nbsp;&nbsp; storage&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
                                    /data/storage_a/storage&nbsp;&nbsp;&nbsp;
                                    geoaccount&nbsp;&nbsp;&nbsp; <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:ssh://geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:ssh://geoaccount@10.0.231.81::pcic-backup">
ssh://geoaccount@10.0.231.81::pcic-backup</a>&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Faulty&nbsp;&nbsp;&nbsp;
                                    N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
                                    <br clear="none">
                                    10.0.231.92&nbsp;&nbsp;&nbsp; storage&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
                                    /data/storage_c/storage&nbsp;&nbsp;&nbsp;
                                    geoaccount&nbsp;&nbsp;&nbsp; <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:ssh://geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:ssh://geoaccount@10.0.231.81::pcic-backup">
ssh://geoaccount@10.0.231.81::pcic-backup</a>&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Faulty&nbsp;&nbsp;&nbsp;
                                    N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
                                    <br clear="none">
                                    10.0.231.93&nbsp;&nbsp;&nbsp; storage&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
                                    /data/storage_c/storage&nbsp;&nbsp;&nbsp;
                                    geoaccount&nbsp;&nbsp;&nbsp; <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:ssh://geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:ssh://geoaccount@10.0.231.81::pcic-backup">
ssh://geoaccount@10.0.231.81::pcic-backup</a>&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Faulty&nbsp;&nbsp;&nbsp;
                                    N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
                                    <br clear="none">
                                    10.0.231.93&nbsp;&nbsp;&nbsp; storage&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
                                    /data/storage_b/storage&nbsp;&nbsp;&nbsp;
                                    geoaccount&nbsp;&nbsp;&nbsp; <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:ssh://geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:ssh://geoaccount@10.0.231.81::pcic-backup">
ssh://geoaccount@10.0.231.81::pcic-backup</a>&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Faulty&nbsp;&nbsp;&nbsp;
                                    N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
                                    <br clear="none">
                                    10.0.231.93&nbsp;&nbsp;&nbsp; storage&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
                                    /data/storage_a/storage&nbsp;&nbsp;&nbsp;
                                    geoaccount&nbsp;&nbsp;&nbsp; <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:ssh://geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:ssh://geoaccount@10.0.231.81::pcic-backup">
ssh://geoaccount@10.0.231.81::pcic-backup</a>&nbsp;&nbsp;&nbsp; N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Faulty&nbsp;&nbsp;&nbsp;
                                    N/A&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; N/A&nbsp;
                                    <br clear="none">
                                    <br clear="none">
[root@storage01 ~]# gluster volume geo-replication storage geoaccount@10.0.231.81::pcic-backup stop<br clear="none">
Stopping geo-replication session between storage &amp; geoaccount@10.0.231.81::pcic-backup has been successful<br clear="none">
                                    <br clear="none">
                                    <br clear="none">
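[Editor's note] The "reset-sync-time + delete + create" approach suggested at the top of this thread would look roughly like the sketch below. The exact syntax should be verified against your Gluster version (8.3 here); in particular, <font face="monospace">push-pem</font> assumes the passwordless-SSH and mountbroker setup for the unprivileged <font face="monospace">geoaccount</font> user from the original session is still in place.<br clear="none">
<br clear="none">
<font face="monospace">
# Sketch only -- verify against the geo-replication admin guide for your version.<br clear="none">
MASTER=storage<br clear="none">
SLAVE=geoaccount@10.0.231.81::pcic-backup<br clear="none">
<br clear="none">
gluster volume geo-replication $MASTER $SLAVE stop<br clear="none">
# delete the session AND discard the stored sync time, so the re-created<br clear="none">
# session starts with a fresh initial crawl instead of the stale position<br clear="none">
gluster volume geo-replication $MASTER $SLAVE delete reset-sync-time<br clear="none">
gluster volume geo-replication $MASTER $SLAVE create push-pem<br clear="none">
gluster volume geo-replication $MASTER $SLAVE start<br clear="none">
</font>
<br clear="none">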
The changelogs didn't really show anything new around changelog selection:<br clear="none">
                                    <br clear="none">
[root@storage01 storage_10.0.231.81_pcic-backup]# cat changes-data-storage_a-storage.log | egrep "2021-03-11"<br clear="none">
[2021-03-11 19:15:30.552889] I [MSGID: 132028] [gf-changelog.c:577:gf_changelog_register_generic] 0-gfchangelog: Registering brick [{brick=/data/storage_a/storage}, {notify_filter=1}]<br clear="none">
[2021-03-11 19:15:30.552893] I [MSGID: 101190] [event-epoll.c:670:event_dispatch_epoll_worker] 0-epoll: Started thread with index [{index=0}]<br clear="none">
[2021-03-11 19:15:30.552894] I [MSGID: 101190] [event-epoll.c:670:event_dispatch_epoll_worker] 0-epoll: Started thread with index [{index=1}]<br clear="none">
[2021-03-11 19:15:30.553633] I [MSGID: 101190] [event-epoll.c:670:event_dispatch_epoll_worker] 0-epoll: Started thread with index [{index=3}]<br clear="none">
[2021-03-11 19:15:30.553634] I [MSGID: 101190] [event-epoll.c:670:event_dispatch_epoll_worker] 0-epoll: Started thread with index [{index=2}]<br clear="none">
[2021-03-11 19:15:30.554236] D [rpcsvc.c:2831:rpcsvc_init] 0-rpc-service: RPC service inited.<br clear="none">
[2021-03-11 19:15:30.554403] D [rpcsvc.c:2342:rpcsvc_program_register] 0-rpc-service: New program registered: GF-DUMP, Num: 123451501, Ver: 1, Port: 0<br clear="none">
[2021-03-11 19:15:30.554420] D [rpc-transport.c:278:rpc_transport_load] 0-rpc-transport: attempt to load file /usr/lib64/glusterfs/8.3/rpc-transport/socket.so<br clear="none">
[2021-03-11 19:15:30.554933] D [socket.c:4485:socket_init] 0-socket.gfchangelog: disabling nodelay<br clear="none">
[2021-03-11 19:15:30.554944] D [socket.c:4523:socket_init] 0-socket.gfchangelog: Configured transport.tcp-user-timeout=42<br clear="none">
[2021-03-11 19:15:30.554949] D [socket.c:4543:socket_init] 0-socket.gfchangelog: Reconfigured transport.keepalivecnt=9<br clear="none">
[2021-03-11 19:15:30.555002] I [socket.c:929:__socket_server_bind] 0-socket.gfchangelog: closing (AF_UNIX) reuse check socket 23<br clear="none">
[2021-03-11 19:15:30.555324] D [rpcsvc.c:2342:rpcsvc_program_register] 0-rpc-service: New program registered: LIBGFCHANGELOG REBORP, Num: 1886350951, Ver: 1, Port: 0<br clear="none">
[2021-03-11 19:15:30.555345] D [rpc-clnt.c:1020:rpc_clnt_connection_init] 0-gfchangelog: defaulting frame-timeout to 30mins<br clear="none">
[2021-03-11 19:15:30.555351] D [rpc-clnt.c:1032:rpc_clnt_connection_init] 0-gfchangelog: disable ping-timeout<br clear="none">
[2021-03-11 19:15:30.555358] D [rpc-transport.c:278:rpc_transport_load] 0-rpc-transport: attempt to load file /usr/lib64/glusterfs/8.3/rpc-transport/socket.so<br clear="none">
[2021-03-11 19:15:30.555399] D [socket.c:4485:socket_init] 0-gfchangelog: disabling nodelay<br clear="none">
[2021-03-11 19:15:30.555406] D [socket.c:4523:socket_init] 0-gfchangelog: Configured transport.tcp-user-timeout=42<br clear="none">
[2021-03-11 19:15:32.555711] D [rpc-clnt-ping.c:298:rpc_clnt_start_ping] 0-gfchangelog: ping timeout is 0, returning<br clear="none">
[2021-03-11 19:15:32.572157] I [MSGID: 132035] [gf-history-changelog.c:837:gf_history_changelog] 0-gfchangelog: Requesting historical changelogs [{start=1614666553}, {end=1615490132}]<br clear="none">
[2021-03-11 19:15:32.572436] I [MSGID: 132019] [gf-history-changelog.c:755:gf_changelog_extract_min_max] 0-gfchangelog: changelogs min max [{min=1597342860}, {max=1615490121}, {total_changelogs=1256897}]<br clear="none">
[2021-03-11 19:15:32.621244] E [MSGID: 132009] [gf-history-changelog.c:941:gf_history_changelog] 0-gfchangelog: wrong result [{for=end}, {start=1615490121}, {idx=1256896}]<br clear="none">
[2021-03-11 19:15:46.733182] I [MSGID: 132028] [gf-changelog.c:577:gf_changelog_register_generic] 0-gfchangelog: Registering brick [{brick=/data/storage_a/storage}, {notify_filter=1}]<br clear="none">
[2021-03-11 19:15:46.733316] I [MSGID: 101190] [event-epoll.c:670:event_dispatch_epoll_worker] 0-epoll: Started thread with index [{index=0}]<br clear="none">
[2021-03-11 19:15:46.733348] I [MSGID: 101190] [event-epoll.c:670:event_dispatch_epoll_worker] 0-epoll: Started thread with index [{index=1}]<br clear="none">
[2021-03-11 19:15:46.734031] I [MSGID: 101190] [event-epoll.c:670:event_dispatch_epoll_worker] 0-epoll: Started thread with index [{index=2}]<br clear="none">
[2021-03-11 19:15:46.734085] I [MSGID: 101190] [event-epoll.c:670:event_dispatch_epoll_worker] 0-epoll: Started thread with index [{index=3}]<br clear="none">
[2021-03-11 19:15:46.734591] D [rpcsvc.c:2831:rpcsvc_init] 0-rpc-service: RPC service inited.<br clear="none">
[2021-03-11 19:15:46.734755] D [rpcsvc.c:2342:rpcsvc_program_register] 0-rpc-service: New program registered: GF-DUMP, Num: 123451501, Ver: 1, Port: 0<br clear="none">
[2021-03-11 19:15:46.734772] D [rpc-transport.c:278:rpc_transport_load] 0-rpc-transport: attempt to load file /usr/lib64/glusterfs/8.3/rpc-transport/socket.so<br clear="none">
[2021-03-11 19:15:46.735256] D [socket.c:4485:socket_init] 0-socket.gfchangelog: disabling nodelay<br clear="none">
[2021-03-11 19:15:46.735266] D [socket.c:4523:socket_init] 0-socket.gfchangelog: Configured transport.tcp-user-timeout=42<br clear="none">
[2021-03-11 19:15:46.735271] D [socket.c:4543:socket_init] 0-socket.gfchangelog: Reconfigured transport.keepalivecnt=9<br clear="none">
[2021-03-11 19:15:46.735325] I [socket.c:929:__socket_server_bind] 0-socket.gfchangelog: closing (AF_UNIX) reuse check socket 21<br clear="none">
[2021-03-11 19:15:46.735704] D [rpcsvc.c:2342:rpcsvc_program_register] 0-rpc-service: New program registered: LIBGFCHANGELOG REBORP, Num: 1886350951, Ver: 1, Port: 0<br clear="none">
[2021-03-11 19:15:46.735721] D [rpc-clnt.c:1020:rpc_clnt_connection_init] 0-gfchangelog: defaulting frame-timeout to 30mins<br clear="none">
[2021-03-11 19:15:46.735726] D [rpc-clnt.c:1032:rpc_clnt_connection_init] 0-gfchangelog: disable ping-timeout<br clear="none">
[2021-03-11 19:15:46.735733] D [rpc-transport.c:278:rpc_transport_load] 0-rpc-transport: attempt to load file /usr/lib64/glusterfs/8.3/rpc-transport/socket.so<br clear="none">
[2021-03-11 19:15:46.735771] D [socket.c:4485:socket_init] 0-gfchangelog: disabling nodelay<br clear="none">
[2021-03-11 19:15:46.735778] D [socket.c:4523:socket_init] 0-gfchangelog: Configured transport.tcp-user-timeout=42<br clear="none">
[2021-03-11 19:15:47.618464] D [rpc-clnt-ping.c:298:rpc_clnt_start_ping] 0-gfchangelog: ping timeout is 0, returning<br clear="none">
                                    <br clear="none">
                                    <br clear="none">
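[Editor's note] One quick sanity check on the history-changelog window above is to convert the epoch timestamps to UTC (values copied from the storage_a log; GNU <font face="monospace">date</font> assumed):<br clear="none">
<br clear="none">
<font face="monospace">

```shell
# Timestamps from the storage_a history-changelog messages above
date -u -d @1614666553   # requested start          -> Tue Mar  2 06:29:13 UTC 2021
date -u -d @1615490132   # requested end            -> Thu Mar 11 19:15:32 UTC 2021
date -u -d @1597342860   # oldest changelog (min)   -> Thu Aug 13 18:21:00 UTC 2020
date -u -d @1615490121   # newest changelog (max)   -> Thu Mar 11 19:15:21 UTC 2021
# The requested end is past the newest changelog on disk:
echo $(( 1615490132 - 1615490121 ))   # -> 11 (seconds)
```

</font>
<br clear="none">
The requested end coincides with the moment of the request itself (19:15:32) and lies 11 seconds past the newest changelog (max), which may be related to the "wrong result [{for=end}...]" error logged at that instant.<br clear="none">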
[root@storage01 storage_10.0.231.81_pcic-backup]# cat changes-data-storage_b-storage.log | egrep "2021-03-11"<br clear="none">
[2021-03-11 19:15:30.611457] I [MSGID: 132028] [gf-changelog.c:577:gf_changelog_register_generic] 0-gfchangelog: Registering brick [{brick=/data/storage_b/storage}, {notify_filter=1}]<br clear="none">
[2021-03-11 19:15:30.611574] I [MSGID: 101190] [event-epoll.c:670:event_dispatch_epoll_worker] 0-epoll: Started thread with index [{index=1}]<br clear="none">
[2021-03-11 19:15:30.611641] I [MSGID: 101190] [event-epoll.c:670:event_dispatch_epoll_worker] 0-epoll: Started thread with index [{index=3}]<br clear="none">
[2021-03-11 19:15:30.611645] I [MSGID: 101190] [event-epoll.c:670:event_dispatch_epoll_worker] 0-epoll: Started thread with index [{index=2}]<br clear="none">
[2021-03-11 19:15:30.612325] D [rpcsvc.c:2831:rpcsvc_init] 0-rpc-service: RPC service inited.<br clear="none">
[2021-03-11 19:15:30.612488] D [rpcsvc.c:2342:rpcsvc_program_register] 0-rpc-service: New program registered: GF-DUMP, Num: 123451501, Ver: 1, Port: 0<br clear="none">
[2021-03-11 19:15:30.612507] D [rpc-transport.c:278:rpc_transport_load] 0-rpc-transport: attempt to load file /usr/lib64/glusterfs/8.3/rpc-transport/socket.so<br clear="none">
[2021-03-11 19:15:30.613005] D [socket.c:4485:socket_init] 0-socket.gfchangelog: disabling nodelay<br clear="none">
[2021-03-11 19:15:30.613130] D [socket.c:4523:socket_init] 0-socket.gfchangelog: Configured transport.tcp-user-timeout=42<br clear="none">
[2021-03-11 19:15:30.613142] D [socket.c:4543:socket_init] 0-socket.gfchangelog: Reconfigured transport.keepalivecnt=9<br clear="none">
[2021-03-11 19:15:30.613208] I [socket.c:929:__socket_server_bind] 0-socket.gfchangelog: closing (AF_UNIX) reuse check socket 22<br clear="none">
[2021-03-11 19:15:30.613545] D [rpcsvc.c:2342:rpcsvc_program_register] 0-rpc-service: New program registered: LIBGFCHANGELOG REBORP, Num: 1886350951, Ver: 1, Port: 0<br clear="none">
[2021-03-11 19:15:30.613567] D [rpc-clnt.c:1020:rpc_clnt_connection_init] 0-gfchangelog: defaulting frame-timeout to 30mins<br clear="none">
[2021-03-11 19:15:30.613574] D [rpc-clnt.c:1032:rpc_clnt_connection_init] 0-gfchangelog: disable ping-timeout<br clear="none">
[2021-03-11 19:15:30.613582] D [rpc-transport.c:278:rpc_transport_load] 0-rpc-transport: attempt to load file /usr/lib64/glusterfs/8.3/rpc-transport/socket.so<br clear="none">
[2021-03-11 19:15:30.613637] D [socket.c:4485:socket_init] 0-gfchangelog: disabling nodelay<br clear="none">
[2021-03-11 19:15:30.613654] D [socket.c:4523:socket_init] 0-gfchangelog: Configured transport.tcp-user-timeout=42<br clear="none">
[2021-03-11 19:15:32.614273] D [rpc-clnt-ping.c:298:rpc_clnt_start_ping] 0-gfchangelog: ping timeout is 0, returning<br clear="none">
[2021-03-11 19:15:32.643628] I [MSGID: 132035] [gf-history-changelog.c:837:gf_history_changelog] 0-gfchangelog: Requesting historical changelogs [{start=1614666552}, {end=1615490132}]<br clear="none">
[2021-03-11 19:15:32.643716] I [MSGID: 132019] [gf-history-changelog.c:755:gf_changelog_extract_min_max] 0-gfchangelog: changelogs min max [{min=1597342860}, {max=1615490123}, {total_changelogs=1264296}]<br clear="none">
[2021-03-11 19:15:32.700397] E [MSGID: 132009] [gf-history-changelog.c:941:gf_history_changelog] 0-gfchangelog: wrong result [{for=end}, {start=1615490123}, {idx=1264295}]<br clear="none">
[2021-03-11 19:15:46.832322] I [MSGID: 132028] [gf-changelog.c:577:gf_changelog_register_generic] 0-gfchangelog: Registering brick [{brick=/data/storage_b/storage}, {notify_filter=1}]<br clear="none">
[2021-03-11 19:15:46.832394] I [MSGID: 101190] [event-epoll.c:670:event_dispatch_epoll_worker] 0-epoll: Started thread with index [{index=0}]<br clear="none">
[2021-03-11 19:15:46.832465] I [MSGID: 101190] [event-epoll.c:670:event_dispatch_epoll_worker] 0-epoll: Started thread with index [{index=1}]<br clear="none">
[2021-03-11 19:15:46.832531] I [MSGID: 101190] [event-epoll.c:670:event_dispatch_epoll_worker] 0-epoll: Started thread with index [{index=2}]<br clear="none">
[2021-03-11 19:15:46.833086] I [MSGID: 101190] [event-epoll.c:670:event_dispatch_epoll_worker] 0-epoll: Started thread with index [{index=3}]<br clear="none">
[2021-03-11 19:15:46.833648] D [rpcsvc.c:2831:rpcsvc_init] 0-rpc-service: RPC service inited.<br clear="none">
[2021-03-11 19:15:46.833817] D [rpcsvc.c:2342:rpcsvc_program_register] 0-rpc-service: New program registered: GF-DUMP, Num: 123451501, Ver: 1, Port: 0<br clear="none">
[2021-03-11 19:15:46.833835] D [rpc-transport.c:278:rpc_transport_load] 0-rpc-transport: attempt to load file /usr/lib64/glusterfs/8.3/rpc-transport/socket.so<br clear="none">
[2021-03-11 19:15:46.834368] D [socket.c:4485:socket_init] 0-socket.gfchangelog: disabling nodelay<br clear="none">
[2021-03-11 19:15:46.834380] D [socket.c:4523:socket_init] 0-socket.gfchangelog: Configured transport.tcp-user-timeout=42<br clear="none">
[2021-03-11 19:15:46.834386] D [socket.c:4543:socket_init] 0-socket.gfchangelog: Reconfigured transport.keepalivecnt=9<br clear="none">
                                    [2021-03-11 19:15:46.834441] I
                                    [socket.c:929:__socket_server_bind]
                                    0-socket.gfchangelog: closing
                                    (AF_UNIX) reuse check socket 23<br clear="none">
                                    [2021-03-11 19:15:46.834768] D
                                    [rpcsvc.c:2342:rpcsvc_program_register]
                                    0-rpc-service: New program
                                    registered: LIBGFCHANGELOG REBORP,
                                    Num: 1886350951, Ver: 1, Port: 0<br clear="none">
                                    [2021-03-11 19:15:46.834789] D
                                    [rpc-clnt.c:1020:rpc_clnt_connection_init]
                                    0-gfchangelog: defaulting
                                    frame-timeout to 30mins<br clear="none">
                                    [2021-03-11 19:15:46.834795] D
                                    [rpc-clnt.c:1032:rpc_clnt_connection_init]
                                    0-gfchangelog: disable ping-timeout<br clear="none">
                                    [2021-03-11 19:15:46.834802] D
                                    [rpc-transport.c:278:rpc_transport_load]
                                    0-rpc-transport: attempt to load
                                    file
                                    /usr/lib64/glusterfs/8.3/rpc-transport/socket.so<br clear="none">
                                    [2021-03-11 19:15:46.834845] D
                                    [socket.c:4485:socket_init]
                                    0-gfchangelog: disabling nodelay<br clear="none">
                                    [2021-03-11 19:15:46.834853] D
                                    [socket.c:4523:socket_init]
                                    0-gfchangelog: Configured
                                    transport.tcp-user-timeout=42<br clear="none">
                                    [2021-03-11 19:15:47.618476] D
                                    [rpc-clnt-ping.c:298:rpc_clnt_start_ping]
                                    0-gfchangelog: ping timeout is 0,
                                    returning<br clear="none">
                                    <br clear="none">
                                    <br clear="none">
gsyncd logged a lot, but I'm not sure if it's helpful: <br clear="none">
                                    <br clear="none">
[2021-03-11 19:15:00.41898] D [gsyncd(config-get):303:main] &lt;top&gt;: Using session config file [{path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf}]<br clear="none">
[2021-03-11 19:15:21.551302] D [gsyncd(config-get):303:main] &lt;top&gt;: Using session config file [{path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf}]<br clear="none">
[2021-03-11 19:15:21.631470] D [gsyncd(status):303:main] &lt;top&gt;: Using session config file [{path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf}]<br clear="none">
[2021-03-11 19:15:21.718386] D [gsyncd(status):303:main] &lt;top&gt;: Using session config file [{path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf}]<br clear="none">
[2021-03-11 19:15:21.804991] D [gsyncd(status):303:main] &lt;top&gt;: Using session config file [{path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf}]<br clear="none">
[2021-03-11 19:15:26.203999] D [gsyncd(config-get):303:main] &lt;top&gt;: Using session config file [{path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf}]<br clear="none">
[2021-03-11 19:15:26.284775] D [gsyncd(config-get):303:main] &lt;top&gt;: Using session config file [{path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf}]<br clear="none">
[2021-03-11 19:15:26.573355] D [gsyncd(config-get):303:main] &lt;top&gt;: Using session config file [{path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf}]<br clear="none">
[2021-03-11 19:15:26.653752] D [gsyncd(monitor):303:main] &lt;top&gt;: Using session config file [{path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf}]<br clear="none">
[2021-03-11 19:15:26.756994] D [monitor(monitor):304:distribute] &lt;top&gt;: master bricks: [{'host': '10.0.231.91', 'uuid': 'afc24654-2887-41f6-a9c2-8e835de243b6', 'dir': '/data/storage_a/storage'}, {'host': '10.0.231.92', 'uuid': 'ebbd7b74-3cf8-4752-a71c-b0f0ca86c97d', 'dir': '/data/storage_b/storage'}, {'host': '10.0.231.93', 'uuid': '8b28b331-3780-46bc-9da3-fb27de4ab57b', 'dir': '/data/storage_c/storage'}, {'host': '10.0.231.92', 'uuid': 'ebbd7b74-3cf8-4752-a71c-b0f0ca86c97d', 'dir': '/data/storage_a/storage'}, {'host': '10.0.231.93', 'uuid': '8b28b331-3780-46bc-9da3-fb27de4ab57b', 'dir': '/data/storage_b/storage'}, {'host': '10.0.231.91', 'uuid': 'afc24654-2887-41f6-a9c2-8e835de243b6', 'dir': '/data/storage_c/storage'}, {'host': '10.0.231.93', 'uuid': '8b28b331-3780-46bc-9da3-fb27de4ab57b', 'dir': '/data/storage_a/storage'}, {'host': '10.0.231.91', 'uuid': 'afc24654-2887-41f6-a9c2-8e835de243b6', 'dir': '/data/storage_b/storage'}, {'host': '10.0.231.92', 'uuid': 'ebbd7b74-3cf8-4752-a71c-b0f0ca86c97d', 'dir': '/data/storage_c/storage'}]<br clear="none">
[2021-03-11 19:15:26.757252] D [monitor(monitor):314:distribute] &lt;top&gt;: slave SSH gateway: <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:geoaccount@10.0.231.81" target="_blank" href="mailto:geoaccount@10.0.231.81">geoaccount@10.0.231.81</a><br clear="none">
[2021-03-11 19:15:27.416235] D [monitor(monitor):334:distribute] &lt;top&gt;: slave bricks: [{'host': '10.0.231.81', 'uuid': 'b88dea4f-31ec-416a-9110-3ccdc3910acd', 'dir': '/data/brick'}, {'host': '10.0.231.82', 'uuid': 'be50a8de-3934-4fee-a80d-8e2e99017902', 'dir': '/data/brick'}]<br clear="none">
[2021-03-11 19:15:27.416825] D [syncdutils(monitor):932:is_hot] Volinfo: brickpath: '10.0.231.91:/data/storage_a/storage'<br clear="none">
[2021-03-11 19:15:27.417273] D [syncdutils(monitor):932:is_hot] Volinfo: brickpath: '10.0.231.91:/data/storage_c/storage'<br clear="none">
[2021-03-11 19:15:27.417515] D [syncdutils(monitor):932:is_hot] Volinfo: brickpath: '10.0.231.91:/data/storage_b/storage'<br clear="none">
[2021-03-11 19:15:27.417763] D [monitor(monitor):348:distribute] &lt;top&gt;: worker specs: [({'host': '10.0.231.91', 'uuid': 'afc24654-2887-41f6-a9c2-8e835de243b6', 'dir': '/data/storage_a/storage'}, ('geoaccount@10.0.231.81', 'b88dea4f-31ec-416a-9110-3ccdc3910acd'), '1', False), ({'host': '10.0.231.91', 'uuid': 'afc24654-2887-41f6-a9c2-8e835de243b6', 'dir': '/data/storage_c/storage'}, ('<a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:geoaccount@10.0.231.82" target="_blank" href="mailto:geoaccount@10.0.231.82">geoaccount@10.0.231.82</a>', 'be50a8de-3934-4fee-a80d-8e2e99017902'), '2', False), ({'host': '10.0.231.91', 'uuid': 'afc24654-2887-41f6-a9c2-8e835de243b6', 'dir': '/data/storage_b/storage'}, ('<a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:geoaccount@10.0.231.82" target="_blank" href="mailto:geoaccount@10.0.231.82">geoaccount@10.0.231.82</a>', 'be50a8de-3934-4fee-a80d-8e2e99017902'), '3', False)]<br clear="none">
                                    [2021-03-11 19:15:27.425009] I
                                    [monitor(monitor):160:monitor]
                                    Monitor: starting gsyncd worker
                                    [{brick=/data/storage_c/storage},
                                    {slave_node=10.0.231.82}]<br clear="none">
                                    [2021-03-11 19:15:27.426764] I
                                    [monitor(monitor):160:monitor]
                                    Monitor: starting gsyncd worker
                                    [{brick=/data/storage_b/storage},
                                    {slave_node=10.0.231.82}]<br clear="none">
                                    [2021-03-11 19:15:27.429208] I
                                    [monitor(monitor):160:monitor]
                                    Monitor: starting gsyncd worker
                                    [{brick=/data/storage_a/storage},
                                    {slave_node=10.0.231.81}]<br clear="none">
                                    [2021-03-11 19:15:27.432280] D
                                    [monitor(monitor):195:monitor]
                                    Monitor: Worker would mount volume
                                    privately<br clear="none">
                                    [2021-03-11 19:15:27.434195] D
                                    [monitor(monitor):195:monitor]
                                    Monitor: Worker would mount volume
                                    privately<br clear="none">
                                    [2021-03-11 19:15:27.436584] D
                                    [monitor(monitor):195:monitor]
                                    Monitor: Worker would mount volume
                                    privately<br clear="none">
                                    [2021-03-11 19:15:27.478806] D
                                    [gsyncd(worker
                                    /data/storage_c/storage):303:main]
                                    &lt;top&gt;: Using session config
                                    file
[{path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf}]<br clear="none">
                                    [2021-03-11 19:15:27.478852] D
                                    [gsyncd(worker
                                    /data/storage_b/storage):303:main]
                                    &lt;top&gt;: Using session config
                                    file
[{path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf}]<br clear="none">
                                    [2021-03-11 19:15:27.480104] D
                                    [gsyncd(worker
                                    /data/storage_a/storage):303:main]
                                    &lt;top&gt;: Using session config
                                    file
[{path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf}]<br clear="none">
                                    [2021-03-11 19:15:27.500456] I
                                    [resource(worker
                                    /data/storage_c/storage):1387:connect_remote]
                                    SSH: Initializing SSH connection
                                    between master and slave...<br clear="none">
                                    [2021-03-11 19:15:27.501375] I
                                    [resource(worker
                                    /data/storage_b/storage):1387:connect_remote]
                                    SSH: Initializing SSH connection
                                    between master and slave...<br clear="none">
                                    [2021-03-11 19:15:27.502003] I
                                    [resource(worker
                                    /data/storage_a/storage):1387:connect_remote]
                                    SSH: Initializing SSH connection
                                    between master and slave...<br clear="none">
                                    [2021-03-11 19:15:27.525511] D
                                    [repce(worker
                                    /data/storage_a/storage):195:push]
                                    RepceClient: call
                                    192117:140572692309824:1615490127.53
                                    __repce_version__() ...<br clear="none">
                                    [2021-03-11 19:15:27.525582] D
                                    [repce(worker
                                    /data/storage_b/storage):195:push]
                                    RepceClient: call
                                    192115:139891296405312:1615490127.53
                                    __repce_version__() ...<br clear="none">
                                    [2021-03-11 19:15:27.526089] D
                                    [repce(worker
                                    /data/storage_c/storage):195:push]
                                    RepceClient: call
                                    192114:140388828780352:1615490127.53
                                    __repce_version__() ...<br clear="none">
                                    [2021-03-11 19:15:29.435985] D
                                    [repce(worker
                                    /data/storage_a/storage):215:__call__]
                                    RepceClient: call
                                    192117:140572692309824:1615490127.53
                                    __repce_version__ -&gt; 1.0<br clear="none">
                                    [2021-03-11 19:15:29.436213] D
                                    [repce(worker
                                    /data/storage_a/storage):195:push]
                                    RepceClient: call
                                    192117:140572692309824:1615490129.44
                                    version() ...<br clear="none">
                                    [2021-03-11 19:15:29.437136] D
                                    [repce(worker
                                    /data/storage_a/storage):215:__call__]
                                    RepceClient: call
                                    192117:140572692309824:1615490129.44
                                    version -&gt; 1.0<br clear="none">
                                    [2021-03-11 19:15:29.437268] D
                                    [repce(worker
                                    /data/storage_a/storage):195:push]
                                    RepceClient: call
                                    192117:140572692309824:1615490129.44
                                    pid() ...<br clear="none">
                                    [2021-03-11 19:15:29.437915] D
                                    [repce(worker
                                    /data/storage_a/storage):215:__call__]
                                    RepceClient: call
                                    192117:140572692309824:1615490129.44
                                    pid -&gt; 157321<br clear="none">
                                    [2021-03-11 19:15:29.438004] I
                                    [resource(worker
                                    /data/storage_a/storage):1436:connect_remote]
                                    SSH: SSH connection between master
                                    and slave established.
                                    [{duration=1.9359}]<br clear="none">
                                    [2021-03-11 19:15:29.438072] I
                                    [resource(worker
                                    /data/storage_a/storage):1116:connect]
                                    GLUSTER: Mounting gluster volume
                                    locally...<br clear="none">
                                    [2021-03-11 19:15:29.494538] D
                                    [repce(worker
                                    /data/storage_b/storage):215:__call__]
                                    RepceClient: call
                                    192115:139891296405312:1615490127.53
                                    __repce_version__ -&gt; 1.0<br clear="none">
                                    [2021-03-11 19:15:29.494748] D
                                    [repce(worker
                                    /data/storage_b/storage):195:push]
                                    RepceClient: call
                                    192115:139891296405312:1615490129.49
                                    version() ...<br clear="none">
                                    [2021-03-11 19:15:29.495290] D
                                    [repce(worker
                                    /data/storage_b/storage):215:__call__]
                                    RepceClient: call
                                    192115:139891296405312:1615490129.49
                                    version -&gt; 1.0<br clear="none">
                                    [2021-03-11 19:15:29.495400] D
                                    [repce(worker
                                    /data/storage_b/storage):195:push]
                                    RepceClient: call
                                    192115:139891296405312:1615490129.5
                                    pid() ...<br clear="none">
                                    [2021-03-11 19:15:29.495872] D
                                    [repce(worker
                                    /data/storage_b/storage):215:__call__]
                                    RepceClient: call
                                    192115:139891296405312:1615490129.5
                                    pid -&gt; 88110<br clear="none">
                                    [2021-03-11 19:15:29.495960] I
                                    [resource(worker
                                    /data/storage_b/storage):1436:connect_remote]
                                    SSH: SSH connection between master
                                    and slave established.
                                    [{duration=1.9944}]<br clear="none">
                                    [2021-03-11 19:15:29.496028] I
                                    [resource(worker
                                    /data/storage_b/storage):1116:connect]
                                    GLUSTER: Mounting gluster volume
                                    locally...<br clear="none">
                                    [2021-03-11 19:15:29.501255] D
                                    [repce(worker
                                    /data/storage_c/storage):215:__call__]
                                    RepceClient: call
                                    192114:140388828780352:1615490127.53
                                    __repce_version__ -&gt; 1.0<br clear="none">
                                    [2021-03-11 19:15:29.501454] D
                                    [repce(worker
                                    /data/storage_c/storage):195:push]
                                    RepceClient: call
                                    192114:140388828780352:1615490129.5
                                    version() ...<br clear="none">
                                    [2021-03-11 19:15:29.502258] D
                                    [repce(worker
                                    /data/storage_c/storage):215:__call__]
                                    RepceClient: call
                                    192114:140388828780352:1615490129.5
                                    version -&gt; 1.0<br clear="none">
                                    [2021-03-11 19:15:29.502444] D
                                    [repce(worker
                                    /data/storage_c/storage):195:push]
                                    RepceClient: call
                                    192114:140388828780352:1615490129.5
                                    pid() ...<br clear="none">
                                    [2021-03-11 19:15:29.503140] D
                                    [repce(worker
                                    /data/storage_c/storage):215:__call__]
                                    RepceClient: call
                                    192114:140388828780352:1615490129.5
                                    pid -&gt; 88111<br clear="none">
                                    [2021-03-11 19:15:29.503232] I
                                    [resource(worker
                                    /data/storage_c/storage):1436:connect_remote]
                                    SSH: SSH connection between master
                                    and slave established.
                                    [{duration=2.0026}]<br clear="none">
                                    [2021-03-11 19:15:29.503302] I
                                    [resource(worker
                                    /data/storage_c/storage):1116:connect]
                                    GLUSTER: Mounting gluster volume
                                    locally...<br clear="none">
                                    [2021-03-11 19:15:29.533899] D
                                    [resource(worker
                                    /data/storage_a/storage):880:inhibit]
                                    DirectMounter: auxiliary glusterfs
                                    mount in place<br clear="none">
                                    [2021-03-11 19:15:29.595736] D
                                    [resource(worker
                                    /data/storage_b/storage):880:inhibit]
                                    DirectMounter: auxiliary glusterfs
                                    mount in place<br clear="none">
                                    [2021-03-11 19:15:29.601110] D
                                    [resource(worker
                                    /data/storage_c/storage):880:inhibit]
                                    DirectMounter: auxiliary glusterfs
                                    mount in place<br clear="none">
                                    [2021-03-11 19:15:30.541542] D
                                    [resource(worker
                                    /data/storage_a/storage):964:inhibit]
                                    DirectMounter: auxiliary glusterfs
                                    mount prepared<br clear="none">
                                    [2021-03-11 19:15:30.541816] I
                                    [resource(worker
                                    /data/storage_a/storage):1139:connect]
                                    GLUSTER: Mounted gluster volume
                                    [{duration=1.1037}]<br clear="none">
                                    [2021-03-11 19:15:30.541887] I
                                    [subcmds(worker
                                    /data/storage_a/storage):84:subcmd_worker]
                                    &lt;top&gt;: Worker spawn
                                    successful. Acknowledging back to
                                    monitor<br clear="none">
                                    [2021-03-11 19:15:30.542042] D
                                    [master(worker
                                    /data/storage_a/storage):105:gmaster_builder]
                                    &lt;top&gt;: setting up change
                                    detection mode [{mode=xsync}]<br clear="none">
                                    [2021-03-11 19:15:30.542125] D
                                    [monitor(monitor):222:monitor]
                                    Monitor:
                                    worker(/data/storage_a/storage)
                                    connected<br clear="none">
[2021-03-11 19:15:30.543323] D [master(worker /data/storage_a/storage):105:gmaster_builder] &lt;top&gt;: setting up change detection mode [{mode=changelog}]<br clear="none">
[2021-03-11 19:15:30.544460] D [master(worker /data/storage_a/storage):105:gmaster_builder] &lt;top&gt;: setting up change detection mode [{mode=changeloghistory}]<br clear="none">
[2021-03-11 19:15:30.552103] D [master(worker /data/storage_a/storage):778:setup_working_dir] _GMaster: changelog working dir /var/lib/misc/gluster/gsyncd/storage_10.0.231.81_pcic-backup/data-storage_a-storage<br clear="none">
[2021-03-11 19:15:30.602937] D [resource(worker /data/storage_b/storage):964:inhibit] DirectMounter: auxiliary glusterfs mount prepared<br clear="none">
[2021-03-11 19:15:30.603117] I [resource(worker /data/storage_b/storage):1139:connect] GLUSTER: Mounted gluster volume [{duration=1.1070}]<br clear="none">
[2021-03-11 19:15:30.603197] I [subcmds(worker /data/storage_b/storage):84:subcmd_worker] &lt;top&gt;: Worker spawn successful. Acknowledging back to monitor<br clear="none">
[2021-03-11 19:15:30.603353] D [master(worker /data/storage_b/storage):105:gmaster_builder] &lt;top&gt;: setting up change detection mode [{mode=xsync}]<br clear="none">
[2021-03-11 19:15:30.603338] D [monitor(monitor):222:monitor] Monitor: worker(/data/storage_b/storage) connected<br clear="none">
[2021-03-11 19:15:30.604620] D [master(worker /data/storage_b/storage):105:gmaster_builder] &lt;top&gt;: setting up change detection mode [{mode=changelog}]<br clear="none">
[2021-03-11 19:15:30.605600] D [master(worker /data/storage_b/storage):105:gmaster_builder] &lt;top&gt;: setting up change detection mode [{mode=changeloghistory}]<br clear="none">
[2021-03-11 19:15:30.608365] D [resource(worker /data/storage_c/storage):964:inhibit] DirectMounter: auxiliary glusterfs mount prepared<br clear="none">
[2021-03-11 19:15:30.608534] I [resource(worker /data/storage_c/storage):1139:connect] GLUSTER: Mounted gluster volume [{duration=1.1052}]<br clear="none">
[2021-03-11 19:15:30.608612] I [subcmds(worker /data/storage_c/storage):84:subcmd_worker] &lt;top&gt;: Worker spawn successful. Acknowledging back to monitor<br clear="none">
[2021-03-11 19:15:30.608762] D [master(worker /data/storage_c/storage):105:gmaster_builder] &lt;top&gt;: setting up change detection mode [{mode=xsync}]<br clear="none">
[2021-03-11 19:15:30.608779] D [monitor(monitor):222:monitor] Monitor: worker(/data/storage_c/storage) connected<br clear="none">
[2021-03-11 19:15:30.610033] D [master(worker /data/storage_c/storage):105:gmaster_builder] &lt;top&gt;: setting up change detection mode [{mode=changelog}]<br clear="none">
[2021-03-11 19:15:30.610637] D [master(worker /data/storage_b/storage):778:setup_working_dir] _GMaster: changelog working dir /var/lib/misc/gluster/gsyncd/storage_10.0.231.81_pcic-backup/data-storage_b-storage<br clear="none">
[2021-03-11 19:15:30.610970] D [master(worker /data/storage_c/storage):105:gmaster_builder] &lt;top&gt;: setting up change detection mode [{mode=changeloghistory}]<br clear="none">
[2021-03-11 19:15:30.616197] D [master(worker /data/storage_c/storage):778:setup_working_dir] _GMaster: changelog working dir /var/lib/misc/gluster/gsyncd/storage_10.0.231.81_pcic-backup/data-storage_c-storage<br clear="none">
[2021-03-11 19:15:31.371265] D [gsyncd(config-get):303:main] &lt;top&gt;: Using session config file [{path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf}]<br clear="none">
[2021-03-11 19:15:31.451000] D [gsyncd(status):303:main] &lt;top&gt;: Using session config file [{path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf}]<br clear="none">
[2021-03-11 19:15:31.537257] D [gsyncd(status):303:main] &lt;top&gt;: Using session config file [{path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf}]<br clear="none">
[2021-03-11 19:15:31.623800] D [gsyncd(status):303:main] &lt;top&gt;: Using session config file [{path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf}]<br clear="none">
[2021-03-11 19:15:32.555840] D [master(worker /data/storage_a/storage):778:setup_working_dir] _GMaster: changelog working dir /var/lib/misc/gluster/gsyncd/storage_10.0.231.81_pcic-backup/data-storage_a-storage<br clear="none">
[2021-03-11 19:15:32.556051] D [master(worker /data/storage_a/storage):778:setup_working_dir] _GMaster: changelog working dir /var/lib/misc/gluster/gsyncd/storage_10.0.231.81_pcic-backup/data-storage_a-storage<br clear="none">
[2021-03-11 19:15:32.556122] D [master(worker /data/storage_a/storage):778:setup_working_dir] _GMaster: changelog working dir /var/lib/misc/gluster/gsyncd/storage_10.0.231.81_pcic-backup/data-storage_a-storage<br clear="none">
[2021-03-11 19:15:32.556179] I [master(worker /data/storage_a/storage):1645:register] _GMaster: Working dir [{path=/var/lib/misc/gluster/gsyncd/storage_10.0.231.81_pcic-backup/data-storage_a-storage}]<br clear="none">
[2021-03-11 19:15:32.556359] I [resource(worker /data/storage_a/storage):1292:service_loop] GLUSTER: Register time [{time=1615490132}]<br clear="none">
[2021-03-11 19:15:32.556823] D [repce(worker /data/storage_a/storage):195:push] RepceClient: call 192117:140570487928576:1615490132.56 keep_alive(None,) ...<br clear="none">
[2021-03-11 19:15:32.558429] D [repce(worker /data/storage_a/storage):215:__call__] RepceClient: call 192117:140570487928576:1615490132.56 keep_alive -&gt; 1<br clear="none">
[2021-03-11 19:15:32.558974] D [master(worker /data/storage_a/storage):540:crawlwrap] _GMaster: primary master with volume id cf94a8f2-324b-40b3-bf72-c3766100ea99 ...<br clear="none">
[2021-03-11 19:15:32.567478] I [gsyncdstatus(worker /data/storage_a/storage):281:set_active] GeorepStatus: Worker Status Change [{status=Active}]<br clear="none">
[2021-03-11 19:15:32.571824] I [gsyncdstatus(worker /data/storage_a/storage):253:set_worker_crawl_status] GeorepStatus: Crawl Status Change [{status=History Crawl}]<br clear="none">
[2021-03-11 19:15:32.572052] I [master(worker /data/storage_a/storage):1559:crawl] _GMaster: starting history crawl [{turns=1}, {stime=(1614666553, 0)}, {entry_stime=(1614664115, 0)}, {etime=1615490132}]<br clear="none">
[2021-03-11 19:15:32.614506] D [master(worker /data/storage_b/storage):778:setup_working_dir] _GMaster: changelog working dir /var/lib/misc/gluster/gsyncd/storage_10.0.231.81_pcic-backup/data-storage_b-storage<br clear="none">
[2021-03-11 19:15:32.614701] D [master(worker /data/storage_b/storage):778:setup_working_dir] _GMaster: changelog working dir /var/lib/misc/gluster/gsyncd/storage_10.0.231.81_pcic-backup/data-storage_b-storage<br clear="none">
[2021-03-11 19:15:32.614788] D [master(worker /data/storage_b/storage):778:setup_working_dir] _GMaster: changelog working dir /var/lib/misc/gluster/gsyncd/storage_10.0.231.81_pcic-backup/data-storage_b-storage<br clear="none">
[2021-03-11 19:15:32.614845] I [master(worker /data/storage_b/storage):1645:register] _GMaster: Working dir [{path=/var/lib/misc/gluster/gsyncd/storage_10.0.231.81_pcic-backup/data-storage_b-storage}]<br clear="none">
[2021-03-11 19:15:32.615000] I [resource(worker /data/storage_b/storage):1292:service_loop] GLUSTER: Register time [{time=1615490132}]<br clear="none">
[2021-03-11 19:15:32.615586] D [repce(worker /data/storage_b/storage):195:push] RepceClient: call 192115:139889215526656:1615490132.62 keep_alive(None,) ...<br clear="none">
[2021-03-11 19:15:32.617373] D [repce(worker /data/storage_b/storage):215:__call__] RepceClient: call 192115:139889215526656:1615490132.62 keep_alive -&gt; 1<br clear="none">
[2021-03-11 19:15:32.618144] D [master(worker /data/storage_b/storage):540:crawlwrap] _GMaster: primary master with volume id cf94a8f2-324b-40b3-bf72-c3766100ea99 ...<br clear="none">
[2021-03-11 19:15:32.619323] D [master(worker /data/storage_c/storage):778:setup_working_dir] _GMaster: changelog working dir /var/lib/misc/gluster/gsyncd/storage_10.0.231.81_pcic-backup/data-storage_c-storage<br clear="none">
[2021-03-11 19:15:32.619491] D [master(worker /data/storage_c/storage):778:setup_working_dir] _GMaster: changelog working dir /var/lib/misc/gluster/gsyncd/storage_10.0.231.81_pcic-backup/data-storage_c-storage<br clear="none">
[2021-03-11 19:15:32.619739] D [master(worker /data/storage_c/storage):778:setup_working_dir] _GMaster: changelog working dir /var/lib/misc/gluster/gsyncd/storage_10.0.231.81_pcic-backup/data-storage_c-storage<br clear="none">
[2021-03-11 19:15:32.619863] I [master(worker /data/storage_c/storage):1645:register] _GMaster: Working dir [{path=/var/lib/misc/gluster/gsyncd/storage_10.0.231.81_pcic-backup/data-storage_c-storage}]<br clear="none">
[2021-03-11 19:15:32.620040] I [resource(worker /data/storage_c/storage):1292:service_loop] GLUSTER: Register time [{time=1615490132}]<br clear="none">
[2021-03-11 19:15:32.620599] D [repce(worker /data/storage_c/storage):195:push] RepceClient: call 192114:140386886469376:1615490132.62 keep_alive(None,) ...<br clear="none">
[2021-03-11 19:15:32.621397] E [resource(worker /data/storage_a/storage):1312:service_loop] GLUSTER: Changelog History Crawl failed [{error=[Errno 0] Success}]<br clear="none">
[2021-03-11 19:15:32.622035] D [repce(worker /data/storage_c/storage):215:__call__] RepceClient: call 192114:140386886469376:1615490132.62 keep_alive -&gt; 1<br clear="none">
[2021-03-11 19:15:32.622701] D [master(worker /data/storage_c/storage):540:crawlwrap] _GMaster: primary master with volume id cf94a8f2-324b-40b3-bf72-c3766100ea99 ...<br clear="none">
[2021-03-11 19:15:32.627031] I [gsyncdstatus(worker /data/storage_b/storage):281:set_active] GeorepStatus: Worker Status Change [{status=Active}]<br clear="none">
[2021-03-11 19:15:32.643184] I [gsyncdstatus(worker /data/storage_b/storage):253:set_worker_crawl_status] GeorepStatus: Crawl Status Change [{status=History Crawl}]<br clear="none">
[2021-03-11 19:15:32.643528] I [master(worker /data/storage_b/storage):1559:crawl] _GMaster: starting history crawl [{turns=1}, {stime=(1614666552, 0)}, {entry_stime=(1614664113, 0)}, {etime=1615490132}]<br clear="none">
[2021-03-11 19:15:32.645148] I [gsyncdstatus(worker /data/storage_c/storage):281:set_active] GeorepStatus: Worker Status Change [{status=Active}]<br clear="none">
[2021-03-11 19:15:32.649631] I [gsyncdstatus(worker /data/storage_c/storage):253:set_worker_crawl_status] GeorepStatus: Crawl Status Change [{status=History Crawl}]<br clear="none">
[2021-03-11 19:15:32.649882] I [master(worker /data/storage_c/storage):1559:crawl] _GMaster: starting history crawl [{turns=1}, {stime=(1614666552, 0)}, {entry_stime=(1614664108, 0)}, {etime=1615490132}]<br clear="none">
[2021-03-11 19:15:32.650907] E [resource(worker /data/storage_c/storage):1312:service_loop] GLUSTER: Changelog History Crawl failed [{error=[Errno 0] Success}]<br clear="none">
[2021-03-11 19:15:32.700489] E [resource(worker /data/storage_b/storage):1312:service_loop] GLUSTER: Changelog History Crawl failed [{error=[Errno 0] Success}]<br clear="none">
[2021-03-11 19:15:33.545886] I [monitor(monitor):228:monitor] Monitor: worker died in startup phase [{brick=/data/storage_a/storage}]<br clear="none">
[2021-03-11 19:15:33.550487] I [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status Change [{status=Faulty}]<br clear="none">
[2021-03-11 19:15:33.606991] I [monitor(monitor):228:monitor] Monitor: worker died in startup phase [{brick=/data/storage_b/storage}]<br clear="none">
[2021-03-11 19:15:33.611573] I [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status Change [{status=Faulty}]<br clear="none">
[2021-03-11 19:15:33.612337] I [monitor(monitor):228:monitor] Monitor: worker died in startup phase [{brick=/data/storage_c/storage}]<br clear="none">
[2021-03-11 19:15:33.615777] I [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status Change [{status=Faulty}]<br clear="none">
[2021-03-11 19:15:34.684247] D [gsyncd(config-get):303:main] &lt;top&gt;: Using session config file [{path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf}]<br clear="none">
[2021-03-11 19:15:34.764971] D [gsyncd(status):303:main] &lt;top&gt;: Using session config file [{path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf}]<br clear="none">
[2021-03-11 19:15:34.851174] D [gsyncd(status):303:main] &lt;top&gt;: Using session config file [{path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf}]<br clear="none">
[2021-03-11 19:15:34.937166] D [gsyncd(status):303:main] &lt;top&gt;: Using session config file [{path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf}]<br clear="none">
[2021-03-11 19:15:36.994502] D [gsyncd(config-get):303:main] &lt;top&gt;: Using session config file [{path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf}]<br clear="none">
[2021-03-11 19:15:37.73805] D [gsyncd(status):303:main] &lt;top&gt;: Using session config file [{path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf}]<br clear="none">
[2021-03-11 19:15:37.159288] D [gsyncd(status):303:main] &lt;top&gt;: Using session config file [{path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf}]<br clear="none">
[2021-03-11 19:15:37.244153] D [gsyncd(status):303:main] &lt;top&gt;: Using session config file [{path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf}]<br clear="none">
[2021-03-11 19:15:38.916510] D [gsyncd(config-get):303:main] &lt;top&gt;: Using session config file [{path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf}]<br clear="none">
[2021-03-11 19:15:38.997649] D [gsyncd(status):303:main] &lt;top&gt;: Using session config file [{path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf}]<br clear="none">
[2021-03-11 19:15:39.84816] D [gsyncd(status):303:main] &lt;top&gt;: Using session config file [{path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf}]<br clear="none">
[2021-03-11 19:15:39.172045] D [gsyncd(status):303:main] &lt;top&gt;: Using session config file [{path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf}]<br clear="none">
[2021-03-11 19:15:40.896359] D [gsyncd(config-get):303:main] &lt;top&gt;: Using session config file [{path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf}]<br clear="none">
[2021-03-11 19:15:40.976135] D [gsyncd(status):303:main] &lt;top&gt;: Using session config file [{path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf}]<br clear="none">
[2021-03-11 19:15:41.62052] D [gsyncd(status):303:main] &lt;top&gt;: Using session config file [{path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf}]<br clear="none">
[2021-03-11 19:15:41.147902] D [gsyncd(status):303:main] &lt;top&gt;: Using session config file [{path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf}]<br clear="none">
[2021-03-11 19:15:42.791997] D [gsyncd(config-get):303:main] &lt;top&gt;: Using session config file [{path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf}]<br clear="none">
[2021-03-11 19:15:42.871239] D [gsyncd(status):303:main] &lt;top&gt;: Using session config file [{path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf}]<br clear="none">
[2021-03-11 19:15:42.956609] D [gsyncd(status):303:main] &lt;top&gt;: Using session config file [{path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf}]<br clear="none">
[2021-03-11 19:15:43.42473] D [gsyncd(status):303:main] &lt;top&gt;: Using session config file [{path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf}]<br clear="none">
                                    [2021-03-11 19:15:43.566190] I
                                    [gsyncdstatus(monitor):248:set_worker_status]
                                    GeorepStatus: Worker Status Change
                                    [{status=Initializing...}]<br clear="none">
                                    [2021-03-11 19:15:43.566400] I
                                    [monitor(monitor):160:monitor]
                                    Monitor: starting gsyncd worker
                                    [{brick=/data/storage_a/storage},
                                    {slave_node=10.0.231.81}]<br clear="none">
                                    [2021-03-11 19:15:43.572240] D
                                    [monitor(monitor):195:monitor]
                                    Monitor: Worker would mount volume
                                    privately<br clear="none">
                                    [2021-03-11 19:15:43.612744] D
                                    [gsyncd(worker
                                    /data/storage_a/storage):303:main]
                                    &lt;top&gt;: Using session config
                                    file
[{path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf}]<br clear="none">
[2021-03-11 19:15:43.625689] I [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status Change [{status=Initializing...}]<br clear="none">
[2021-03-11 19:15:43.626060] I [monitor(monitor):160:monitor] Monitor: starting gsyncd worker [{brick=/data/storage_b/storage}, {slave_node=10.0.231.82}]<br clear="none">
[2021-03-11 19:15:43.632287] I [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status Change [{status=Initializing...}]<br clear="none">
[2021-03-11 19:15:43.632137] D [monitor(monitor):195:monitor] Monitor: Worker would mount volume privately<br clear="none">
[2021-03-11 19:15:43.632508] I [monitor(monitor):160:monitor] Monitor: starting gsyncd worker [{brick=/data/storage_c/storage}, {slave_node=10.0.231.82}]<br clear="none">
[2021-03-11 19:15:43.635565] I [resource(worker /data/storage_a/storage):1387:connect_remote] SSH: Initializing SSH connection between master and slave...<br clear="none">
[2021-03-11 19:15:43.637835] D [monitor(monitor):195:monitor] Monitor: Worker would mount volume privately<br clear="none">
[2021-03-11 19:15:43.661304] D [repce(worker /data/storage_a/storage):195:push] RepceClient: call 192535:140367272073024:1615490143.66 __repce_version__() ...<br clear="none">
[2021-03-11 19:15:43.674499] D [gsyncd(worker /data/storage_b/storage):303:main] &lt;top&gt;: Using session config file [{path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf}]<br clear="none">
[2021-03-11 19:15:43.680706] D [gsyncd(worker /data/storage_c/storage):303:main] &lt;top&gt;: Using session config file [{path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf}]<br clear="none">
[2021-03-11 19:15:43.693773] I [resource(worker /data/storage_b/storage):1387:connect_remote] SSH: Initializing SSH connection between master and slave...<br clear="none">
[2021-03-11 19:15:43.700957] I [resource(worker /data/storage_c/storage):1387:connect_remote] SSH: Initializing SSH connection between master and slave...<br clear="none">
[2021-03-11 19:15:43.717686] D [repce(worker /data/storage_b/storage):195:push] RepceClient: call 192539:139907321804608:1615490143.72 __repce_version__() ...<br clear="none">
[2021-03-11 19:15:43.725369] D [repce(worker /data/storage_c/storage):195:push] RepceClient: call 192541:140653101852480:1615490143.73 __repce_version__() ...<br clear="none">
[2021-03-11 19:15:44.289117] D [gsyncd(config-get):303:main] &lt;top&gt;: Using session config file [{path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf}]<br clear="none">
[2021-03-11 19:15:44.375693] D [gsyncd(status):303:main] &lt;top&gt;: Using session config file [{path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf}]<br clear="none">
[2021-03-11 19:15:44.472251] D [gsyncd(status):303:main] &lt;top&gt;: Using session config file [{path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf}]<br clear="none">
[2021-03-11 19:15:44.558429] D [gsyncd(status):303:main] &lt;top&gt;: Using session config file [{path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf}]<br clear="none">
[2021-03-11 19:15:45.619694] D [repce(worker /data/storage_a/storage):215:__call__] RepceClient: call 192535:140367272073024:1615490143.66 __repce_version__ -&gt; 1.0<br clear="none">
[2021-03-11 19:15:45.619930] D [repce(worker /data/storage_a/storage):195:push] RepceClient: call 192535:140367272073024:1615490145.62 version() ...<br clear="none">
[2021-03-11 19:15:45.621191] D [repce(worker /data/storage_a/storage):215:__call__] RepceClient: call 192535:140367272073024:1615490145.62 version -&gt; 1.0<br clear="none">
[2021-03-11 19:15:45.621332] D [repce(worker /data/storage_a/storage):195:push] RepceClient: call 192535:140367272073024:1615490145.62 pid() ...<br clear="none">
[2021-03-11 19:15:45.621859] D [repce(worker /data/storage_a/storage):215:__call__] RepceClient: call 192535:140367272073024:1615490145.62 pid -&gt; 158229<br clear="none">
[2021-03-11 19:15:45.621939] I [resource(worker /data/storage_a/storage):1436:connect_remote] SSH: SSH connection between master and slave established. [{duration=1.9862}]<br clear="none">
[2021-03-11 19:15:45.622000] I [resource(worker /data/storage_a/storage):1116:connect] GLUSTER: Mounting gluster volume locally...<br clear="none">
[2021-03-11 19:15:45.714468] D [resource(worker /data/storage_a/storage):880:inhibit] DirectMounter: auxiliary glusterfs mount in place<br clear="none">
[2021-03-11 19:15:45.718441] D [repce(worker /data/storage_c/storage):215:__call__] RepceClient: call 192541:140653101852480:1615490143.73 __repce_version__ -&gt; 1.0<br clear="none">
[2021-03-11 19:15:45.718643] D [repce(worker /data/storage_c/storage):195:push] RepceClient: call 192541:140653101852480:1615490145.72 version() ...<br clear="none">
[2021-03-11 19:15:45.719492] D [repce(worker /data/storage_c/storage):215:__call__] RepceClient: call 192541:140653101852480:1615490145.72 version -&gt; 1.0<br clear="none">
[2021-03-11 19:15:45.719772] D [repce(worker /data/storage_c/storage):195:push] RepceClient: call 192541:140653101852480:1615490145.72 pid() ...<br clear="none">
[2021-03-11 19:15:45.720202] D [repce(worker /data/storage_b/storage):215:__call__] RepceClient: call 192539:139907321804608:1615490143.72 __repce_version__ -&gt; 1.0<br clear="none">
[2021-03-11 19:15:45.720381] D [repce(worker /data/storage_b/storage):195:push] RepceClient: call 192539:139907321804608:1615490145.72 version() ...<br clear="none">
[2021-03-11 19:15:45.720463] D [repce(worker /data/storage_c/storage):215:__call__] RepceClient: call 192541:140653101852480:1615490145.72 pid -&gt; 88921<br clear="none">
[2021-03-11 19:15:45.720694] I [resource(worker /data/storage_c/storage):1436:connect_remote] SSH: SSH connection between master and slave established. [{duration=2.0196}]<br clear="none">
[2021-03-11 19:15:45.720882] I [resource(worker /data/storage_c/storage):1116:connect] GLUSTER: Mounting gluster volume locally...<br clear="none">
[2021-03-11 19:15:45.721146] D [repce(worker /data/storage_b/storage):215:__call__] RepceClient: call 192539:139907321804608:1615490145.72 version -&gt; 1.0<br clear="none">
[2021-03-11 19:15:45.721271] D [repce(worker /data/storage_b/storage):195:push] RepceClient: call 192539:139907321804608:1615490145.72 pid() ...<br clear="none">
[2021-03-11 19:15:45.721795] D [repce(worker /data/storage_b/storage):215:__call__] RepceClient: call 192539:139907321804608:1615490145.72 pid -&gt; 88924<br clear="none">
[2021-03-11 19:15:45.721911] I [resource(worker /data/storage_b/storage):1436:connect_remote] SSH: SSH connection between master and slave established. [{duration=2.0280}]<br clear="none">
[2021-03-11 19:15:45.721993] I [resource(worker /data/storage_b/storage):1116:connect] GLUSTER: Mounting gluster volume locally...<br clear="none">
[2021-03-11 19:15:45.816891] D [resource(worker /data/storage_b/storage):880:inhibit] DirectMounter: auxiliary glusterfs mount in place<br clear="none">
[2021-03-11 19:15:45.816960] D [resource(worker /data/storage_c/storage):880:inhibit] DirectMounter: auxiliary glusterfs mount in place<br clear="none">
[2021-03-11 19:15:46.721534] D [resource(worker /data/storage_a/storage):964:inhibit] DirectMounter: auxiliary glusterfs mount prepared<br clear="none">
[2021-03-11 19:15:46.721726] I [resource(worker /data/storage_a/storage):1139:connect] GLUSTER: Mounted gluster volume [{duration=1.0997}]<br clear="none">
[2021-03-11 19:15:46.721796] I [subcmds(worker /data/storage_a/storage):84:subcmd_worker] &lt;top&gt;: Worker spawn successful. Acknowledging back to monitor<br clear="none">
[2021-03-11 19:15:46.721971] D [master(worker /data/storage_a/storage):105:gmaster_builder] &lt;top&gt;: setting up change detection mode [{mode=xsync}]<br clear="none">
[2021-03-11 19:15:46.722122] D [monitor(monitor):222:monitor] Monitor: worker(/data/storage_a/storage) connected<br clear="none">
[2021-03-11 19:15:46.723871] D [master(worker /data/storage_a/storage):105:gmaster_builder] &lt;top&gt;: setting up change detection mode [{mode=changelog}]<br clear="none">
[2021-03-11 19:15:46.725100] D [master(worker /data/storage_a/storage):105:gmaster_builder] &lt;top&gt;: setting up change detection mode [{mode=changeloghistory}]<br clear="none">
[2021-03-11 19:15:46.732400] D [master(worker /data/storage_a/storage):778:setup_working_dir] _GMaster: changelog working dir /var/lib/misc/gluster/gsyncd/storage_10.0.231.81_pcic-backup/data-storage_a-storage<br clear="none">
[2021-03-11 19:15:46.823477] D [resource(worker /data/storage_c/storage):964:inhibit] DirectMounter: auxiliary glusterfs mount prepared<br clear="none">
[2021-03-11 19:15:46.823645] I [resource(worker /data/storage_c/storage):1139:connect] GLUSTER: Mounted gluster volume [{duration=1.1027}]<br clear="none">
[2021-03-11 19:15:46.823754] I [subcmds(worker /data/storage_c/storage):84:subcmd_worker] &lt;top&gt;: Worker spawn successful. Acknowledging back to monitor<br clear="none">
[2021-03-11 19:15:46.823932] D [master(worker /data/storage_c/storage):105:gmaster_builder] &lt;top&gt;: setting up change detection mode [{mode=xsync}]<br clear="none">
[2021-03-11 19:15:46.823904] D [resource(worker /data/storage_b/storage):964:inhibit] DirectMounter: auxiliary glusterfs mount prepared<br clear="none">
[2021-03-11 19:15:46.823930] D [monitor(monitor):222:monitor] Monitor: worker(/data/storage_c/storage) connected<br clear="none">
[2021-03-11 19:15:46.824103] I [resource(worker /data/storage_b/storage):1139:connect] GLUSTER: Mounted gluster volume [{duration=1.1020}]<br clear="none">
[2021-03-11 19:15:46.824184] I [subcmds(worker /data/storage_b/storage):84:subcmd_worker] &lt;top&gt;: Worker spawn successful. Acknowledging back to monitor<br clear="none">
[2021-03-11 19:15:46.824340] D [master(worker /data/storage_b/storage):105:gmaster_builder] &lt;top&gt;: setting up change detection mode [{mode=xsync}]<br clear="none">
[2021-03-11 19:15:46.824321] D [monitor(monitor):222:monitor] Monitor: worker(/data/storage_b/storage) connected<br clear="none">
[2021-03-11 19:15:46.825100] D [master(worker /data/storage_c/storage):105:gmaster_builder] &lt;top&gt;: setting up change detection mode [{mode=changelog}]<br clear="none">
[2021-03-11 19:15:46.825414] D [master(worker /data/storage_b/storage):105:gmaster_builder] &lt;top&gt;: setting up change detection mode [{mode=changelog}]<br clear="none">
[2021-03-11 19:15:46.826375] D [master(worker /data/storage_b/storage):105:gmaster_builder] &lt;top&gt;: setting up change detection mode [{mode=changeloghistory}]<br clear="none">
[2021-03-11 19:15:46.826574] D [master(worker /data/storage_c/storage):105:gmaster_builder] &lt;top&gt;: setting up change detection mode [{mode=changeloghistory}]<br clear="none">
[2021-03-11 19:15:46.831506] D [master(worker /data/storage_b/storage):778:setup_working_dir] _GMaster: changelog working dir /var/lib/misc/gluster/gsyncd/storage_10.0.231.81_pcic-backup/data-storage_b-storage<br clear="none">
[2021-03-11 19:15:46.833168] D [master(worker /data/storage_c/storage):778:setup_working_dir] _GMaster: changelog working dir /var/lib/misc/gluster/gsyncd/storage_10.0.231.81_pcic-backup/data-storage_c-storage<br clear="none">
[2021-03-11 19:15:47.275141] D [gsyncd(config-get):303:main] &lt;top&gt;: Using session config file [{path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf}]<br clear="none">
[2021-03-11 19:15:47.320247] D [gsyncd(config-get):303:main] &lt;top&gt;: Using session config file [{path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf}]<br clear="none">
[2021-03-11 19:15:47.570877] D [gsyncd(config-get):303:main] &lt;top&gt;: Using session config file [{path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf}]<br clear="none">
[2021-03-11 19:15:47.615571] D [gsyncd(config-get):303:main] &lt;top&gt;: Using session config file [{path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf}]<br clear="none">
[2021-03-11 19:15:47.620893] E [syncdutils(worker /data/storage_a/storage):325:log_raise_exception] &lt;top&gt;: connection to peer is broken<br clear="none">
[2021-03-11 19:15:47.620939] E [syncdutils(worker /data/storage_c/storage):325:log_raise_exception] &lt;top&gt;: connection to peer is broken<br clear="none">
[2021-03-11 19:15:47.621668] E [syncdutils(worker /data/storage_a/storage):847:errlog] Popen: command returned error [{cmd=ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-_AyCOc/79fa3dc75e30f532b4a40bc08c2b10a1.sock <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:geoaccount@10.0.231.81" target="_blank" href="mailto:geoaccount@10.0.231.81">geoaccount@10.0.231.81</a> /nonexistent/gsyncd slave storage <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:geoaccount@10.0.231.81::pcic-backup">geoaccount@10.0.231.81::pcic-backup</a> --master-node 10.0.231.91 --master-node-id afc24654-2887-41f6-a9c2-8e835de243b6 --master-brick /data/storage_a/storage --local-node 10.0.231.81 --local-node-id b88dea4f-31ec-416a-9110-3ccdc3910acd --slave-timeout 120 --slave-log-level INFO --slave-gluster-log-level INFO --slave-gluster-command-dir /usr/sbin --master-dist-count 3}, {error=255}]<br clear="none">
[2021-03-11 19:15:47.621685] E [syncdutils(worker /data/storage_c/storage):847:errlog] Popen: command returned error [{cmd=ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-WOgOEu/e15fc58bb13552de0710eaf018209548.sock <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:geoaccount@10.0.231.82" target="_blank" href="mailto:geoaccount@10.0.231.82">geoaccount@10.0.231.82</a> /nonexistent/gsyncd slave storage <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:geoaccount@10.0.231.81::pcic-backup">geoaccount@10.0.231.81::pcic-backup</a> --master-node 10.0.231.91 --master-node-id afc24654-2887-41f6-a9c2-8e835de243b6 --master-brick /data/storage_c/storage --local-node 10.0.231.82 --local-node-id be50a8de-3934-4fee-a80d-8e2e99017902 --slave-timeout 120 --slave-log-level INFO --slave-gluster-log-level INFO --slave-gluster-command-dir /usr/sbin --master-dist-count 3}, {error=255}]<br clear="none">
[2021-03-11 19:15:47.621776] E [syncdutils(worker /data/storage_a/storage):851:logerr] Popen: ssh&gt; Killed by signal 15.<br clear="none">
[2021-03-11 19:15:47.621819] E [syncdutils(worker /data/storage_c/storage):851:logerr] Popen: ssh&gt; Killed by signal 15.<br clear="none">
[2021-03-11 19:15:47.621850] E [syncdutils(worker /data/storage_b/storage):325:log_raise_exception] &lt;top&gt;: connection to peer is broken<br clear="none">
[2021-03-11 19:15:47.622437] E [syncdutils(worker /data/storage_b/storage):847:errlog] Popen: command returned error [{cmd=ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-Vy935W/e15fc58bb13552de0710eaf018209548.sock <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:geoaccount@10.0.231.82" target="_blank" href="mailto:geoaccount@10.0.231.82">geoaccount@10.0.231.82</a> /nonexistent/gsyncd slave storage <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:geoaccount@10.0.231.81::pcic-backup">geoaccount@10.0.231.81::pcic-backup</a> --master-node 10.0.231.91 --master-node-id afc24654-2887-41f6-a9c2-8e835de243b6 --master-brick /data/storage_b/storage --local-node 10.0.231.82 --local-node-id be50a8de-3934-4fee-a80d-8e2e99017902 --slave-timeout 120 --slave-log-level INFO --slave-gluster-log-level INFO --slave-gluster-command-dir /usr/sbin --master-dist-count 3}, {error=255}]<br clear="none">
[2021-03-11 19:15:47.622556] E [syncdutils(worker /data/storage_b/storage):851:logerr] Popen: ssh&gt; Killed by signal 15.<br clear="none">
[2021-03-11 19:15:47.723756] I [monitor(monitor):228:monitor] Monitor: worker died in startup phase [{brick=/data/storage_a/storage}]<br clear="none">
[2021-03-11 19:15:47.731405] I [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status Change [{status=Faulty}]<br clear="none">
[2021-03-11 19:15:47.825223] I [monitor(monitor):228:monitor] Monitor: worker died in startup phase [{brick=/data/storage_c/storage}]<br clear="none">
[2021-03-11 19:15:47.825685] I [monitor(monitor):228:monitor] Monitor: worker died in startup phase [{brick=/data/storage_b/storage}]<br clear="none">
[2021-03-11 19:15:47.829011] I [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status Change [{status=Faulty}]<br clear="none">
[2021-03-11 19:15:47.830965] I [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status Change [{status=Faulty}]<br clear="none">
[2021-03-11 19:15:48.669634] D [gsyncd(monitor-status):303:main] &lt;top&gt;: Using session config file [{path=/var/lib/glusterd/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.conf}]<br clear="none">
[2021-03-11 19:15:48.683784] I [subcmds(monitor-status):29:subcmd_monitor_status] &lt;top&gt;: Monitor Status Change [{status=Stopped}]<br clear="none">
                                    <br clear="none">
                                    <br clear="none">
                                    Thanks,<br clear="none">
                                    &nbsp;-Matthew<br clear="none">
                                    <div class="yiv5774932813moz-signature">
                                      <p><br clear="none">
                                      </p>
                                    </div>
                                    <div class="yiv5774932813moz-cite-prefix">On
                                      3/11/21 9:37 AM, Strahil Nikolov
                                      wrote:<br clear="none">
                                    </div>
                                    <blockquote type="cite">
                                      <pre class="yiv5774932813moz-quote-pre">Notice: This message was sent from outside the University of Victoria email system. Please be cautious with links and sensitive information.


I think you have to raise the log level for the geo-rep session to DEBUG.
I will try to find the command necessary to do that.
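In the meantime, this is roughly what the knobs look like on the master side (a sketch based on the geo-replication config interface documented in the RHGS admin guide; the volume and slave names are the ones from this thread). The commands are echoed rather than executed so the sketch can be run anywhere:

```shell
# Sketch: raise geo-rep log verbosity. Volume/slave names taken from this thread;
# run the printed commands on a master node. Echoed, not executed, on purpose.
MASTER_VOL=storage
SLAVE="geoaccount@10.0.231.81::pcic-backup"
echo "gluster volume geo-replication $MASTER_VOL $SLAVE config log-level DEBUG"
echo "gluster volume geo-replication $MASTER_VOL $SLAVE config gluster-log-level DEBUG"
```

Reset back to INFO the same way once you have captured a faulty cycle.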


Best Regards,
Strahil Nikolov






On Thursday, March 11, 2021, 00:38:41 GMT+2, Matthew Benstead <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-rfc2396E" ymailto="mailto:matthewb@uvic.ca" target="_blank" href="mailto:matthewb@uvic.ca">&lt;matthewb@uvic.ca&gt;</a> wrote:





Thanks Strahil,

Right - I had come across your message in early January that v8 from the CentOS Sig was missing the SELinux rules, and had put SELinux into permissive mode after the upgrade when I saw denied messages in the audit logs.

[root@storage01 ~]# sestatus | egrep "^SELinux status|[mM]ode"
SELinux status:                 enabled
Current mode:                   permissive
Mode from config file:          enforcing

Yes - I am using an unprivileged user for georep:

[root@pcic-backup01 ~]# gluster-mountbroker status
+-------------+-------------+---------------------------+--------------+--------------------------+
|&nbsp;&nbsp;&nbsp;&nbsp; NODE    | NODE STATUS |&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; MOUNT ROOT        |&nbsp;&nbsp;&nbsp; GROUP     |&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; USERS           |
+-------------+-------------+---------------------------+--------------+--------------------------+
| 10.0.231.82 |&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; UP | /var/mountbroker-root(OK) | geogroup(OK) | geoaccount(pcic-backup)  |
|&nbsp; localhost  |&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; UP | /var/mountbroker-root(OK) | geogroup(OK) | geoaccount(pcic-backup)  |
+-------------+-------------+---------------------------+--------------+--------------------------+

[root@pcic-backup02 ~]# gluster-mountbroker status
+-------------+-------------+---------------------------+--------------+--------------------------+
|&nbsp;&nbsp;&nbsp;&nbsp; NODE    | NODE STATUS |&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; MOUNT ROOT        |&nbsp;&nbsp;&nbsp; GROUP     |&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; USERS           |
+-------------+-------------+---------------------------+--------------+--------------------------+
| 10.0.231.81 |&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; UP | /var/mountbroker-root(OK) | geogroup(OK) | geoaccount(pcic-backup)  |
|&nbsp; localhost  |&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; UP | /var/mountbroker-root(OK) | geogroup(OK) | geoaccount(pcic-backup)  |
+-------------+-------------+---------------------------+--------------+--------------------------+

Thanks,
 -Matthew


--
Matthew Benstead
System Administrator
Pacific Climate Impacts Consortium
University of Victoria, UH1
PO Box 1800, STN CSC
Victoria, BC, V8W 2Y2
Phone: +1-250-721-8432
Email: <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:matthewb@uvic.ca" target="_blank" href="mailto:matthewb@uvic.ca">matthewb@uvic.ca</a>


On 3/10/21 2:11 PM, Strahil Nikolov wrote:


</pre>
                                      <blockquote type="cite">
<pre class="yiv5774932813moz-quote-pre">


I have tested geo-rep on v8.3 and it was running quite well until you involve SELinux.



Are you using SELinux?

Are you using an unprivileged user for the geo-rep?




Also, you can check <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-freetext" target="_blank" href="https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.4/html/administration_guide/sect-troubleshooting_geo-replication">https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.4/html/administration_guide/sect-troubleshooting_geo-replication</a> .




Best Regards,

Strahil Nikolov


</pre>
                                        <blockquote type="cite">
<pre class="yiv5774932813moz-quote-pre">
On Thu, Mar 11, 2021 at 0:03, Matthew Benstead

<a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-rfc2396E" ymailto="mailto:matthewb@uvic.ca" target="_blank" href="mailto:matthewb@uvic.ca">&lt;matthewb@uvic.ca&gt;</a> wrote:


Hello,

I recently upgraded my Distributed-Replicate cluster from Gluster 7.9 to 8.3 on CentOS 7 using the CentOS Storage SIG packages. I had geo-replication syncing properly before the upgrade, but now it is not working.

After upgrading both the master and slave clusters I attempted to start geo-replication again, but it quickly goes Faulty:

[root@storage01 ~]# gluster volume geo-replication storage  <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:geoaccount@10.0.231.81::pcic-backup">geoaccount@10.0.231.81::pcic-backup</a> start
Starting geo-replication session between storage &amp; <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:geoaccount@10.0.231.81::pcic-backup">geoaccount@10.0.231.81::pcic-backup</a> has been successful

[root@storage01 ~]# gluster volume geo-replication status

MASTER NODE    MASTER VOL    MASTER BRICK               SLAVE USER    SLAVE                                        SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
10.0.231.91    storage       /data/storage_a/storage    geoaccount     <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:ssh://geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:ssh://geoaccount@10.0.231.81::pcic-backup">ssh://geoaccount@10.0.231.81::pcic-backup</a>    N/A           Faulty    N/A             N/A
10.0.231.91    storage       /data/storage_c/storage    geoaccount     <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:ssh://geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:ssh://geoaccount@10.0.231.81::pcic-backup">ssh://geoaccount@10.0.231.81::pcic-backup</a>    N/A           Faulty    N/A             N/A
10.0.231.91    storage       /data/storage_b/storage    geoaccount     <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:ssh://geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:ssh://geoaccount@10.0.231.81::pcic-backup">ssh://geoaccount@10.0.231.81::pcic-backup</a>    N/A           Faulty    N/A             N/A
10.0.231.92    storage       /data/storage_b/storage    geoaccount     <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:ssh://geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:ssh://geoaccount@10.0.231.81::pcic-backup">ssh://geoaccount@10.0.231.81::pcic-backup</a>    N/A           Faulty    N/A             N/A
10.0.231.92    storage       /data/storage_a/storage    geoaccount     <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:ssh://geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:ssh://geoaccount@10.0.231.81::pcic-backup">ssh://geoaccount@10.0.231.81::pcic-backup</a>    N/A           Faulty    N/A             N/A
10.0.231.92    storage       /data/storage_c/storage    geoaccount     <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:ssh://geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:ssh://geoaccount@10.0.231.81::pcic-backup">ssh://geoaccount@10.0.231.81::pcic-backup</a>    N/A           Faulty    N/A             N/A
10.0.231.93    storage       /data/storage_c/storage    geoaccount     <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:ssh://geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:ssh://geoaccount@10.0.231.81::pcic-backup">ssh://geoaccount@10.0.231.81::pcic-backup</a>    N/A           Faulty    N/A             N/A
10.0.231.93    storage       /data/storage_b/storage    geoaccount     <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:ssh://geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:ssh://geoaccount@10.0.231.81::pcic-backup">ssh://geoaccount@10.0.231.81::pcic-backup</a>    N/A           Faulty    N/A             N/A
10.0.231.93    storage       /data/storage_a/storage    geoaccount     <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:ssh://geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:ssh://geoaccount@10.0.231.81::pcic-backup">ssh://geoaccount@10.0.231.81::pcic-backup</a>    N/A           Faulty    N/A             N/A

[root@storage01 ~]# gluster volume geo-replication storage  <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:geoaccount@10.0.231.81::pcic-backup">geoaccount@10.0.231.81::pcic-backup</a> stop
Stopping geo-replication session between storage &amp;&nbsp;&nbsp;<a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:geoaccount@10.0.231.81::pcic-backup" target="_blank" href="mailto:geoaccount@10.0.231.81::pcic-backup">geoaccount@10.0.231.81::pcic-backup</a> has been successful


I went through the gsyncd logs and see it attempts to go back through the changelogs - which would make sense - but fails:
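As an aside, the structured [{key=value}] fields in gluster 8's log lines (like the ones that follow) can be pulled apart with a small one-liner, which makes it easier to grep for the stime/etime values across bricks. This is a hypothetical helper, not part of gluster:

```shell
# Sketch: extract the {key=value} fields from a glusterfs 8.x structured log line.
# Sample line copied from the gsyncd log below; point the same pipeline at the
# real log file to compare stime/entry_stime across bricks.
line='[2021-03-10 19:18:47.281404] I [master(worker /data/storage_b/storage):1559:crawl] _GMaster: starting history crawl [{turns=1}, {stime=(1614666552, 0)}, {entry_stime=(1614664113, 0)}, {etime=1615403927}]'
printf '%s\n' "$line" | grep -oE '\{[a-zA-Z_-]+=[^}]*\}' | tr -d '{}' > /tmp/georep_fields.txt
cat /tmp/georep_fields.txt
```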

[2021-03-10 19:18:42.165807] I [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status Change [{status=Initializing...}]
[2021-03-10 19:18:42.166136] I [monitor(monitor):160:monitor] Monitor: starting gsyncd worker [{brick=/data/storage_a/storage}, {slave_node=10.0.231.81}]
[2021-03-10 19:18:42.167829] I [monitor(monitor):160:monitor] Monitor: starting gsyncd worker [{brick=/data/storage_c/storage}, {slave_node=10.0.231.82}]
[2021-03-10 19:18:42.172343] I [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status Change [{status=Initializing...}]
[2021-03-10 19:18:42.172580] I [monitor(monitor):160:monitor] Monitor: starting gsyncd worker [{brick=/data/storage_b/storage}, {slave_node=10.0.231.82}]
[2021-03-10 19:18:42.235574] I [resource(worker /data/storage_c/storage):1387:connect_remote] SSH: Initializing SSH connection between master and slave...
[2021-03-10 19:18:42.236613] I [resource(worker /data/storage_a/storage):1387:connect_remote] SSH: Initializing SSH connection between master and slave...
[2021-03-10 19:18:42.238614] I [resource(worker /data/storage_b/storage):1387:connect_remote] SSH: Initializing SSH connection between master and slave...
[2021-03-10 19:18:44.144856] I [resource(worker /data/storage_b/storage):1436:connect_remote] SSH: SSH connection between master and slave established. [{duration=1.9059}]
[2021-03-10 19:18:44.145065] I [resource(worker /data/storage_b/storage):1116:connect] GLUSTER: Mounting gluster volume locally...
[2021-03-10 19:18:44.162873] I [resource(worker /data/storage_a/storage):1436:connect_remote] SSH: SSH connection between master and slave established. [{duration=1.9259}]
[2021-03-10 19:18:44.163412] I [resource(worker /data/storage_a/storage):1116:connect] GLUSTER: Mounting gluster volume locally...
[2021-03-10 19:18:44.167506] I [resource(worker /data/storage_c/storage):1436:connect_remote] SSH: SSH connection between master and slave established. [{duration=1.9316}]
[2021-03-10 19:18:44.167746] I [resource(worker /data/storage_c/storage):1116:connect] GLUSTER: Mounting gluster volume locally...
[2021-03-10 19:18:45.251372] I [resource(worker /data/storage_b/storage):1139:connect] GLUSTER: Mounted gluster volume [{duration=1.1062}]
[2021-03-10 19:18:45.251583] I [subcmds(worker /data/storage_b/storage):84:subcmd_worker] &lt;top&gt;: Worker spawn successful. Acknowledging back to monitor
[2021-03-10 19:18:45.271950] I [resource(worker /data/storage_c/storage):1139:connect] GLUSTER: Mounted gluster volume [{duration=1.1041}]
[2021-03-10 19:18:45.272118] I [subcmds(worker /data/storage_c/storage):84:subcmd_worker] &lt;top&gt;: Worker spawn successful. Acknowledging back to monitor
[2021-03-10 19:18:45.275180] I [resource(worker /data/storage_a/storage):1139:connect] GLUSTER: Mounted gluster volume [{duration=1.1116}]
[2021-03-10 19:18:45.275361] I [subcmds(worker /data/storage_a/storage):84:subcmd_worker] &lt;top&gt;: Worker spawn successful. Acknowledging back to monitor
[2021-03-10 19:18:47.265618] I [master(worker /data/storage_b/storage):1645:register] _GMaster: Working dir [{path=/var/lib/misc/gluster/gsyncd/storage_10.0.231.81_pcic-backup/data-storage_b-storage}]
[2021-03-10 19:18:47.265954] I [resource(worker /data/storage_b/storage):1292:service_loop] GLUSTER: Register time [{time=1615403927}]
[2021-03-10 19:18:47.276746] I [gsyncdstatus(worker /data/storage_b/storage):281:set_active] GeorepStatus: Worker Status Change [{status=Active}]
[2021-03-10 19:18:47.281194] I [gsyncdstatus(worker /data/storage_b/storage):253:set_worker_crawl_status] GeorepStatus: Crawl Status Change [{status=History Crawl}]
[2021-03-10 19:18:47.281404] I [master(worker /data/storage_b/storage):1559:crawl] _GMaster: starting history crawl [{turns=1}, {stime=(1614666552, 0)}, {entry_stime=(1614664113, 0)}, {etime=1615403927}]
[2021-03-10 19:18:47.285340] I [master(worker /data/storage_c/storage):1645:register] _GMaster: Working dir [{path=/var/lib/misc/gluster/gsyncd/storage_10.0.231.81_pcic-backup/data-storage_c-storage}]
[2021-03-10 19:18:47.285579] I [resource(worker /data/storage_c/storage):1292:service_loop] GLUSTER: Register time [{time=1615403927}]
[2021-03-10 19:18:47.287383] I [master(worker /data/storage_a/storage):1645:register] _GMaster: Working dir [{path=/var/lib/misc/gluster/gsyncd/storage_10.0.231.81_pcic-backup/data-storage_a-storage}]
[2021-03-10 19:18:47.287697] I [resource(worker /data/storage_a/storage):1292:service_loop] GLUSTER: Register time [{time=1615403927}]
[2021-03-10 19:18:47.298415] I [gsyncdstatus(worker /data/storage_c/storage):281:set_active] GeorepStatus: Worker Status Change [{status=Active}]
[2021-03-10 19:18:47.301342] I [gsyncdstatus(worker /data/storage_a/storage):281:set_active] GeorepStatus: Worker Status Change [{status=Active}]
[2021-03-10 19:18:47.304183] I [gsyncdstatus(worker /data/storage_c/storage):253:set_worker_crawl_status] GeorepStatus: Crawl Status Change [{status=History Crawl}]
[2021-03-10 19:18:47.304418] I [master(worker /data/storage_c/storage):1559:crawl] _GMaster: starting history crawl [{turns=1}, {stime=(1614666552, 0)}, {entry_stime=(1614664108, 0)}, {etime=1615403927}]
[2021-03-10 19:18:47.305294] E [resource(worker /data/storage_c/storage):1312:service_loop] GLUSTER: Changelog History Crawl failed [{error=[Errno 0] Success}]
[2021-03-10 19:18:47.308124] I [gsyncdstatus(worker /data/storage_a/storage):253:set_worker_crawl_status] GeorepStatus: Crawl Status Change [{status=History Crawl}]
[2021-03-10 19:18:47.308509] I [master(worker /data/storage_a/storage):1559:crawl] _GMaster: starting history crawl [{turns=1}, {stime=(1614666553, 0)}, {entry_stime=(1614664115, 0)}, {etime=1615403927}]
[2021-03-10 19:18:47.357470] E [resource(worker /data/storage_b/storage):1312:service_loop] GLUSTER: Changelog History Crawl failed [{error=[Errno 0] Success}]
[2021-03-10 19:18:47.383949] E [resource(worker /data/storage_a/storage):1312:service_loop] GLUSTER: Changelog History Crawl failed [{error=[Errno 0] Success}]
[2021-03-10 19:18:48.255340] I [monitor(monitor):228:monitor] Monitor: worker died in startup phase [{brick=/data/storage_b/storage}]
[2021-03-10 19:18:48.260052] I [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status Change [{status=Faulty}]
[2021-03-10 19:18:48.275651] I [monitor(monitor):228:monitor] Monitor: worker died in startup phase [{brick=/data/storage_c/storage}]
[2021-03-10 19:18:48.278064] I [monitor(monitor):228:monitor] Monitor: worker died in startup phase [{brick=/data/storage_a/storage}]
[2021-03-10 19:18:48.280453] I [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status Change [{status=Faulty}]
[2021-03-10 19:18:48.282274] I [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status Change [{status=Faulty}]
[2021-03-10 19:18:58.275702] I [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status Change [{status=Initializing...}]
[2021-03-10 19:18:58.276041] I [monitor(monitor):160:monitor] Monitor: starting gsyncd worker [{brick=/data/storage_b/storage}, {slave_node=10.0.231.82}]
[2021-03-10 19:18:58.296252] I [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status Change [{status=Initializing...}]
[2021-03-10 19:18:58.296506] I [monitor(monitor):160:monitor] Monitor: starting gsyncd worker [{brick=/data/storage_c/storage}, {slave_node=10.0.231.82}]
[2021-03-10 19:18:58.301290] I [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status Change [{status=Initializing...}]
[2021-03-10 19:18:58.301521] I [monitor(monitor):160:monitor] Monitor: starting gsyncd worker [{brick=/data/storage_a/storage}, {slave_node=10.0.231.81}]
[2021-03-10 19:18:58.345817] I [resource(worker /data/storage_b/storage):1387:connect_remote] SSH: Initializing SSH connection between master and slave...
[2021-03-10 19:18:58.361268] I [resource(worker /data/storage_c/storage):1387:connect_remote] SSH: Initializing SSH connection between master and slave...
[2021-03-10 19:18:58.367985] I [resource(worker /data/storage_a/storage):1387:connect_remote] SSH: Initializing SSH connection between master and slave...
[2021-03-10 19:18:59.115143] I [subcmds(monitor-status):29:subcmd_monitor_status] &lt;top&gt;: Monitor Status Change [{status=Stopped}]

It seems like there is an issue selecting the changelogs - perhaps similar to this issue?  <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-freetext" target="_blank" href="https://github.com/gluster/glusterfs/issues/1766">https://github.com/gluster/glusterfs/issues/1766</a>

[root@storage01 storage_10.0.231.81_pcic-backup]# cat changes-data-storage_a-storage.log
[2021-03-10 19:18:45.284764] I [MSGID: 132028] [gf-changelog.c:577:gf_changelog_register_generic] 0-gfchangelog: Registering brick [{brick=/data/storage_a/storage}, {notify_filter=1}]
[2021-03-10 19:18:45.285275] I [MSGID: 101190] [event-epoll.c:670:event_dispatch_epoll_worker] 0-epoll: Started thread with index [{index=3}]
[2021-03-10 19:18:45.285269] I [MSGID: 101190] [event-epoll.c:670:event_dispatch_epoll_worker] 0-epoll: Started thread with index [{index=2}]
[2021-03-10 19:18:45.286615] I [socket.c:929:__socket_server_bind] 0-socket.gfchangelog: closing (AF_UNIX) reuse check socket 21
[2021-03-10 19:18:47.308607] I [MSGID: 132035] [gf-history-changelog.c:837:gf_history_changelog] 0-gfchangelog: Requesting historical changelogs [{start=1614666553}, {end=1615403927}]
[2021-03-10 19:18:47.308659] I [MSGID: 132019] [gf-history-changelog.c:755:gf_changelog_extract_min_max] 0-gfchangelog: changelogs min max [{min=1597342860}, {max=1615403927}, {total_changelogs=1250878}]
[2021-03-10 19:18:47.383774] E [MSGID: 132009] [gf-history-changelog.c:941:gf_history_changelog] 0-gfchangelog: wrong result [{for=end}, {start=1615403927}, {idx=1250877}]

[root@storage01 storage_10.0.231.81_pcic-backup]# tail -7 changes-data-storage_b-storage.log
[2021-03-10 19:18:45.263211] I [MSGID: 101190] [event-epoll.c:670:event_dispatch_epoll_worker] 0-epoll: Started thread with index [{index=3}]
[2021-03-10 19:18:45.263151] I [MSGID: 132028] [gf-changelog.c:577:gf_changelog_register_generic] 0-gfchangelog: Registering brick [{brick=/data/storage_b/storage}, {notify_filter=1}]
[2021-03-10 19:18:45.263294] I [MSGID: 101190] [event-epoll.c:670:event_dispatch_epoll_worker] 0-epoll: Started thread with index [{index=2}]
[2021-03-10 19:18:45.264598] I [socket.c:929:__socket_server_bind] 0-socket.gfchangelog: closing (AF_UNIX) reuse check socket 23
[2021-03-10 19:18:47.281499] I [MSGID: 132035] [gf-history-changelog.c:837:gf_history_changelog] 0-gfchangelog: Requesting historical changelogs [{start=1614666552}, {end=1615403927}]
[2021-03-10 19:18:47.281551] I [MSGID: 132019] [gf-history-changelog.c:755:gf_changelog_extract_min_max] 0-gfchangelog: changelogs min max [{min=1597342860}, {max=1615403927}, {total_changelogs=1258258}]
[2021-03-10 19:18:47.357244] E [MSGID: 132009] [gf-history-changelog.c:941:gf_history_changelog] 0-gfchangelog: wrong result [{for=end}, {start=1615403927}, {idx=1258257}]
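Those min/max values come from the brick's HTIME index (under BRICK/.glusterfs/changelogs/htime/HTIME.&lt;timestamp&gt;), which as far as I understand is a NUL-separated list of changelog names. A self-contained sketch of how to list its first and last entries - it builds a sample file here, so substitute the real brick path to inspect yours:

```shell
# Sketch: HTIME files appear to be NUL-separated lists of changelog names.
# A sample file is built for the demo; on a real brick use e.g.
#   /data/storage_b/storage/.glusterfs/changelogs/htime/HTIME.*
# The timestamps below are the min/max reported in the log lines above.
htime=$(mktemp)
printf 'CHANGELOG.1597342860\0CHANGELOG.1614666552\0CHANGELOG.1615403927\0' > "$htime"
tr '\0' '\n' < "$htime" | sed '/^$/d' > /tmp/htime_entries.txt
echo "min: $(head -1 /tmp/htime_entries.txt)"
echo "max: $(tail -1 /tmp/htime_entries.txt)"
rm -f "$htime"
```

If the real index shows gaps or duplicated ranges around the upgrade time, that would line up with the "wrong result" error from gf_history_changelog.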

Any ideas on where to debug this? I'd prefer not to have to remove and re-sync everything as there is about 240TB on the cluster...

Thanks,
 -Matthew


________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: <a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-freetext" target="_blank" href="https://meet.google.com/cpu-eiue-hvk">https://meet.google.com/cpu-eiue-hvk</a>
Gluster-users mailing list
<a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-abbreviated" ymailto="mailto:Gluster-users@gluster.org" target="_blank" href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a>
<a rel="nofollow noopener noreferrer" shape="rect" class="yiv5774932813moz-txt-link-freetext" target="_blank" href="https://lists.gluster.org/mailman/listinfo/gluster-users">https://lists.gluster.org/mailman/listinfo/gluster-users</a>


</pre>
                                        </blockquote>
                                      </blockquote>
                                    </blockquote>
                                    <br clear="none">
                                  </div>
                                </div>
                              </div>
                            </blockquote>
                          </div>
                        </div>
                      </div>
                    </blockquote>
                  </div>
                  <br clear="none">
                </div>
              </div>
            </div>
          </blockquote>
        </div>
      </div>
    </blockquote></div>
    <br clear="none">
  </div></div> </div> </blockquote></div>