<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
</head>
<body>
Hello, <br>
<br>
I recently upgraded my Distributed-Replicate cluster from Gluster
7.9 to 8.3 on CentOS 7 using the CentOS Storage SIG packages. I had
geo-replication syncing properly before the upgrade, but it has not
worked since. <br>
<br>
After upgrading both the master and slave clusters, I attempted to
start geo-replication again, but it quickly goes Faulty: <br>
<br>
<font face="monospace">[root@storage01 ~]# gluster volume
geo-replication storage <a class="moz-txt-link-abbreviated" href="mailto:geoaccount@10.0.231.81::pcic-backup">geoaccount@10.0.231.81::pcic-backup</a> start<br>
Starting geo-replication session between storage &
<a class="moz-txt-link-abbreviated" href="mailto:geoaccount@10.0.231.81::pcic-backup">geoaccount@10.0.231.81::pcic-backup</a> has been successful\<br>
<br>
[root@storage01 ~]# gluster volume geo-replication status <br>
<br>
MASTER NODE    MASTER VOL    MASTER BRICK               SLAVE USER    SLAVE                                        SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED<br>
--------------------------------------------------------------------------------------------------------------------------------------------------------------------<br>
10.0.231.91    storage       /data/storage_a/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    N/A           Faulty    N/A             N/A<br>
10.0.231.91    storage       /data/storage_c/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    N/A           Faulty    N/A             N/A<br>
10.0.231.91    storage       /data/storage_b/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    N/A           Faulty    N/A             N/A<br>
10.0.231.92    storage       /data/storage_b/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    N/A           Faulty    N/A             N/A<br>
10.0.231.92    storage       /data/storage_a/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    N/A           Faulty    N/A             N/A<br>
10.0.231.92    storage       /data/storage_c/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    N/A           Faulty    N/A             N/A<br>
10.0.231.93    storage       /data/storage_c/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    N/A           Faulty    N/A             N/A<br>
10.0.231.93    storage       /data/storage_b/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    N/A           Faulty    N/A             N/A<br>
10.0.231.93    storage       /data/storage_a/storage    geoaccount    ssh://geoaccount@10.0.231.81::pcic-backup    N/A           Faulty    N/A             N/A<br>
<br>
[root@storage01 ~]# gluster volume geo-replication storage
geoaccount@10.0.231.81::pcic-backup stop<br>
Stopping geo-replication session between storage &
geoaccount@10.0.231.81::pcic-backup has been successful</font><br>
<br>
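In case it is useful, the worker logs I am quoting below come from
the per-session directory on the master node - I believe this is the
default location: <br>
<br>
<font face="monospace">[root@storage01 ~]# tail -f
/var/log/glusterfs/geo-replication/storage_10.0.231.81_pcic-backup/gsyncd.log</font><br>
<br>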
<br>
I went through the gsyncd logs and see that it attempts to go back
through the changelogs - which would make sense - but fails: <br>
<font face="monospace"><br>
[2021-03-10 19:18:42.165807] I
[gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker
Status Change [{status=Initializing...}]<br>
[2021-03-10 19:18:42.166136] I [monitor(monitor):160:monitor]
Monitor: starting gsyncd worker [{brick=/data/storage_a/storage},
{slave_node=10.0.231.81}]<br>
[2021-03-10 19:18:42.167829] I [monitor(monitor):160:monitor]
Monitor: starting gsyncd worker [{brick=/data/storage_c/storage},
{slave_node=10.0.231.82}]<br>
[2021-03-10 19:18:42.172343] I
[gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker
Status Change [{status=Initializing...}]<br>
[2021-03-10 19:18:42.172580] I [monitor(monitor):160:monitor]
Monitor: starting gsyncd worker [{brick=/data/storage_b/storage},
{slave_node=10.0.231.82}]<br>
[2021-03-10 19:18:42.235574] I [resource(worker
/data/storage_c/storage):1387:connect_remote] SSH: Initializing
SSH connection between master and slave...<br>
[2021-03-10 19:18:42.236613] I [resource(worker
/data/storage_a/storage):1387:connect_remote] SSH: Initializing
SSH connection between master and slave...<br>
[2021-03-10 19:18:42.238614] I [resource(worker
/data/storage_b/storage):1387:connect_remote] SSH: Initializing
SSH connection between master and slave...<br>
[2021-03-10 19:18:44.144856] I [resource(worker
/data/storage_b/storage):1436:connect_remote] SSH: SSH connection
between master and slave established. [{duration=1.9059}]<br>
[2021-03-10 19:18:44.145065] I [resource(worker
/data/storage_b/storage):1116:connect] GLUSTER: Mounting gluster
volume locally...<br>
[2021-03-10 19:18:44.162873] I [resource(worker
/data/storage_a/storage):1436:connect_remote] SSH: SSH connection
between master and slave established. [{duration=1.9259}]<br>
[2021-03-10 19:18:44.163412] I [resource(worker
/data/storage_a/storage):1116:connect] GLUSTER: Mounting gluster
volume locally...<br>
[2021-03-10 19:18:44.167506] I [resource(worker
/data/storage_c/storage):1436:connect_remote] SSH: SSH connection
between master and slave established. [{duration=1.9316}]<br>
[2021-03-10 19:18:44.167746] I [resource(worker
/data/storage_c/storage):1116:connect] GLUSTER: Mounting gluster
volume locally...<br>
[2021-03-10 19:18:45.251372] I [resource(worker
/data/storage_b/storage):1139:connect] GLUSTER: Mounted gluster
volume [{duration=1.1062}]<br>
[2021-03-10 19:18:45.251583] I [subcmds(worker
/data/storage_b/storage):84:subcmd_worker] <top>: Worker
spawn successful. Acknowledging back to monitor<br>
[2021-03-10 19:18:45.271950] I [resource(worker
/data/storage_c/storage):1139:connect] GLUSTER: Mounted gluster
volume [{duration=1.1041}]<br>
[2021-03-10 19:18:45.272118] I [subcmds(worker
/data/storage_c/storage):84:subcmd_worker] <top>: Worker
spawn successful. Acknowledging back to monitor<br>
[2021-03-10 19:18:45.275180] I [resource(worker
/data/storage_a/storage):1139:connect] GLUSTER: Mounted gluster
volume [{duration=1.1116}]<br>
[2021-03-10 19:18:45.275361] I [subcmds(worker
/data/storage_a/storage):84:subcmd_worker] <top>: Worker
spawn successful. Acknowledging back to monitor<br>
[2021-03-10 19:18:47.265618] I [master(worker
/data/storage_b/storage):1645:register] _GMaster: Working dir
[{path=/var/lib/misc/gluster/gsyncd/storage_10.0.231.81_pcic-backup/data-storage_b-storage}]<br>
[2021-03-10 19:18:47.265954] I [resource(worker
/data/storage_b/storage):1292:service_loop] GLUSTER: Register time
[{time=1615403927}]<br>
[2021-03-10 19:18:47.276746] I [gsyncdstatus(worker
/data/storage_b/storage):281:set_active] GeorepStatus: Worker
Status Change [{status=Active}]<br>
[2021-03-10 19:18:47.281194] I [gsyncdstatus(worker
/data/storage_b/storage):253:set_worker_crawl_status]
GeorepStatus: Crawl Status Change [{status=History Crawl}]<br>
[2021-03-10 19:18:47.281404] I [master(worker
/data/storage_b/storage):1559:crawl] _GMaster: starting history
crawl [{turns=1}, {stime=(1614666552, 0)},
{entry_stime=(1614664113, 0)}, {etime=1615403927}]<br>
[2021-03-10 19:18:47.285340] I [master(worker
/data/storage_c/storage):1645:register] _GMaster: Working dir
[{path=/var/lib/misc/gluster/gsyncd/storage_10.0.231.81_pcic-backup/data-storage_c-storage}]<br>
[2021-03-10 19:18:47.285579] I [resource(worker
/data/storage_c/storage):1292:service_loop] GLUSTER: Register time
[{time=1615403927}]<br>
[2021-03-10 19:18:47.287383] I [master(worker
/data/storage_a/storage):1645:register] _GMaster: Working dir
[{path=/var/lib/misc/gluster/gsyncd/storage_10.0.231.81_pcic-backup/data-storage_a-storage}]<br>
[2021-03-10 19:18:47.287697] I [resource(worker
/data/storage_a/storage):1292:service_loop] GLUSTER: Register time
[{time=1615403927}]<br>
[2021-03-10 19:18:47.298415] I [gsyncdstatus(worker
/data/storage_c/storage):281:set_active] GeorepStatus: Worker
Status Change [{status=Active}]<br>
[2021-03-10 19:18:47.301342] I [gsyncdstatus(worker
/data/storage_a/storage):281:set_active] GeorepStatus: Worker
Status Change [{status=Active}]<br>
[2021-03-10 19:18:47.304183] I [gsyncdstatus(worker
/data/storage_c/storage):253:set_worker_crawl_status]
GeorepStatus: Crawl Status Change [{status=History Crawl}]<br>
[2021-03-10 19:18:47.304418] I [master(worker
/data/storage_c/storage):1559:crawl] _GMaster: starting history
crawl [{turns=1}, {stime=(1614666552, 0)},
{entry_stime=(1614664108, 0)}, {etime=1615403927}]<br>
[2021-03-10 19:18:47.305294] E [resource(worker
/data/storage_c/storage):1312:service_loop] GLUSTER: Changelog
History Crawl failed [{error=[Errno 0] Success}]<br>
[2021-03-10 19:18:47.308124] I [gsyncdstatus(worker
/data/storage_a/storage):253:set_worker_crawl_status]
GeorepStatus: Crawl Status Change [{status=History Crawl}]<br>
[2021-03-10 19:18:47.308509] I [master(worker
/data/storage_a/storage):1559:crawl] _GMaster: starting history
crawl [{turns=1}, {stime=(1614666553, 0)},
{entry_stime=(1614664115, 0)}, {etime=1615403927}]<br>
[2021-03-10 19:18:47.357470] E [resource(worker
/data/storage_b/storage):1312:service_loop] GLUSTER: Changelog
History Crawl failed [{error=[Errno 0] Success}]<br>
[2021-03-10 19:18:47.383949] E [resource(worker
/data/storage_a/storage):1312:service_loop] GLUSTER: Changelog
History Crawl failed [{error=[Errno 0] Success}]<br>
[2021-03-10 19:18:48.255340] I [monitor(monitor):228:monitor]
Monitor: worker died in startup phase
[{brick=/data/storage_b/storage}]<br>
[2021-03-10 19:18:48.260052] I
[gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker
Status Change [{status=Faulty}]<br>
[2021-03-10 19:18:48.275651] I [monitor(monitor):228:monitor]
Monitor: worker died in startup phase
[{brick=/data/storage_c/storage}]<br>
[2021-03-10 19:18:48.278064] I [monitor(monitor):228:monitor]
Monitor: worker died in startup phase
[{brick=/data/storage_a/storage}]<br>
[2021-03-10 19:18:48.280453] I
[gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker
Status Change [{status=Faulty}]<br>
[2021-03-10 19:18:48.282274] I
[gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker
Status Change [{status=Faulty}]<br>
[2021-03-10 19:18:58.275702] I
[gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker
Status Change [{status=Initializing...}]<br>
[2021-03-10 19:18:58.276041] I [monitor(monitor):160:monitor]
Monitor: starting gsyncd worker [{brick=/data/storage_b/storage},
{slave_node=10.0.231.82}]<br>
[2021-03-10 19:18:58.296252] I
[gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker
Status Change [{status=Initializing...}]<br>
[2021-03-10 19:18:58.296506] I [monitor(monitor):160:monitor]
Monitor: starting gsyncd worker [{brick=/data/storage_c/storage},
{slave_node=10.0.231.82}]<br>
[2021-03-10 19:18:58.301290] I
[gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker
Status Change [{status=Initializing...}]<br>
[2021-03-10 19:18:58.301521] I [monitor(monitor):160:monitor]
Monitor: starting gsyncd worker [{brick=/data/storage_a/storage},
{slave_node=10.0.231.81}]<br>
[2021-03-10 19:18:58.345817] I [resource(worker
/data/storage_b/storage):1387:connect_remote] SSH: Initializing
SSH connection between master and slave...<br>
[2021-03-10 19:18:58.361268] I [resource(worker
/data/storage_c/storage):1387:connect_remote] SSH: Initializing
SSH connection between master and slave...<br>
[2021-03-10 19:18:58.367985] I [resource(worker
/data/storage_a/storage):1387:connect_remote] SSH: Initializing
SSH connection between master and slave...<br>
[2021-03-10 19:18:59.115143] I
[subcmds(monitor-status):29:subcmd_monitor_status] <top>:
Monitor Status Change [{status=Stopped}]</font><br>
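<br>
For what it's worth, decoding the crawl window from the storage_b
history-crawl line above (stime is the last-synced changelog time,
etime is the worker's register time) shows the workers asking for
roughly the last eight days of changelogs: <br>
<br>
<font face="monospace">[root@storage01 ~]# date -u -d @1614666552<br>
Tue Mar  2 06:29:12 UTC 2021<br>
[root@storage01 ~]# date -u -d @1615403927<br>
Wed Mar 10 19:18:47 UTC 2021</font><br>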
<br>
It seems like there is an issue selecting the changelogs for that
window - perhaps similar to this issue:
<a class="moz-txt-link-freetext" href="https://github.com/gluster/glusterfs/issues/1766">https://github.com/gluster/glusterfs/issues/1766</a><br>
<font face="monospace"><br>
[root@storage01 storage_10.0.231.81_pcic-backup]# cat
changes-data-storage_a-storage.log<br>
[2021-03-10 19:18:45.284764] I [MSGID: 132028]
[gf-changelog.c:577:gf_changelog_register_generic] 0-gfchangelog:
Registering brick [{brick=/data/storage_a/storage},
{notify_filter=1}] <br>
[2021-03-10 19:18:45.285275] I [MSGID: 101190]
[event-epoll.c:670:event_dispatch_epoll_worker] 0-epoll: Started
thread with index [{index=3}] <br>
[2021-03-10 19:18:45.285269] I [MSGID: 101190]
[event-epoll.c:670:event_dispatch_epoll_worker] 0-epoll: Started
thread with index [{index=2}] <br>
[2021-03-10 19:18:45.286615] I [socket.c:929:__socket_server_bind]
0-socket.gfchangelog: closing (AF_UNIX) reuse check socket 21<br>
[2021-03-10 19:18:47.308607] I [MSGID: 132035]
[gf-history-changelog.c:837:gf_history_changelog] 0-gfchangelog:
Requesting historical changelogs [{start=1614666553},
{end=1615403927}] <br>
[2021-03-10 19:18:47.308659] I [MSGID: 132019]
[gf-history-changelog.c:755:gf_changelog_extract_min_max]
0-gfchangelog: changelogs min max [{min=1597342860},
{max=1615403927}, {total_changelogs=1250878}] <br>
[2021-03-10 19:18:47.383774] E [MSGID: 132009]
[gf-history-changelog.c:941:gf_history_changelog] 0-gfchangelog:
wrong result [{for=end}, {start=1615403927}, {idx=1250877}] <br>
<br>
[root@storage01 storage_10.0.231.81_pcic-backup]# tail -7
changes-data-storage_b-storage.log<br>
[2021-03-10 19:18:45.263211] I [MSGID: 101190]
[event-epoll.c:670:event_dispatch_epoll_worker] 0-epoll: Started
thread with index [{index=3}] <br>
[2021-03-10 19:18:45.263151] I [MSGID: 132028]
[gf-changelog.c:577:gf_changelog_register_generic] 0-gfchangelog:
Registering brick [{brick=/data/storage_b/storage},
{notify_filter=1}] <br>
[2021-03-10 19:18:45.263294] I [MSGID: 101190]
[event-epoll.c:670:event_dispatch_epoll_worker] 0-epoll: Started
thread with index [{index=2}] <br>
[2021-03-10 19:18:45.264598] I [socket.c:929:__socket_server_bind]
0-socket.gfchangelog: closing (AF_UNIX) reuse check socket 23<br>
[2021-03-10 19:18:47.281499] I [MSGID: 132035]
[gf-history-changelog.c:837:gf_history_changelog] 0-gfchangelog:
Requesting historical changelogs [{start=1614666552},
{end=1615403927}] <br>
[2021-03-10 19:18:47.281551] I [MSGID: 132019]
[gf-history-changelog.c:755:gf_changelog_extract_min_max]
0-gfchangelog: changelogs min max [{min=1597342860},
{max=1615403927}, {total_changelogs=1258258}] <br>
[2021-03-10 19:18:47.357244] E [MSGID: 132009]
[gf-history-changelog.c:941:gf_history_changelog] 0-gfchangelog:
wrong result [{for=end}, {start=1615403927}, {idx=1258257}] </font><br>
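<br>
Since gf_history_changelog appears to be searching the htime index
for the end timestamp, my next step is to inspect the htime file on
one of the bricks. This is only a sketch of what I intend to check -
the paths are from my brick layout, I'm guessing the HTIME filename
from the min timestamp above (1597342860), and I'm assuming the file
is a NUL-separated list of changelog paths with the count kept in an
xattr: <br>
<br>
<font face="monospace">[root@storage01 ~]# ls
/data/storage_a/storage/.glusterfs/changelogs/htime/<br>
[root@storage01 ~]# getfattr -d -m . -e text
/data/storage_a/storage/.glusterfs/changelogs/htime/HTIME.1597342860<br>
[root@storage01 ~]# tr '\0' '\n' <
/data/storage_a/storage/.glusterfs/changelogs/htime/HTIME.1597342860 |
tail -3</font><br>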
<br>
Any ideas on where to go next with debugging this? I'd prefer not to
have to remove and re-sync everything, as there is about 240TB on the
cluster...<br>
<br>
Thanks,<br>
-Matthew<br>
<font size="-1">
</font>
</body>
</html>