[Gluster-users] Geo-Replication Stuck In "Hybrid Crawl"

Strahil Nikolov hunter86_bg at yahoo.com
Tue Sep 14 03:28:04 UTC 2021


Do you have a lot of small files? What is the bandwidth between the source volume nodes and the secondary nodes?
Usually Hybrid Crawl means geo-rep is in xsync mode; once all the data present at the moment of start has been transferred, geo-rep switches to changelog.
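
You can check which crawl mechanism a session is using, and switch it once the initial transfer completes, through the geo-rep config interface. A minimal sketch using the gfs1 session names from your status output below (option spellings can vary between gluster versions, so verify them against the full config listing first):

# show the change detection mechanism currently in use (xsync or changelog)
gluster volume geo-replication gfs1 geo-user@host03::gfs1 config change_detector

# force changelog-based change detection once the initial sync is complete
gluster volume geo-replication gfs1 geo-user@host03::gfs1 config change_detector changelog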

Best Regards,
Strahil Nikolov
 
 
On Tue, Sep 14, 2021 at 1:06, Boubacar Cisse <cboubacar at gmail.com> wrote:

Hi,

Yes, I have checked both /var/log/glusterfs/geo-replication/<primaryvol_slavenode_slavevol> (on the primary nodes) and /var/log/glusterfs/geo-replication-slaves/<primaryvol_slavenode_slavevol> (on the slave node), but I'm not finding any relevant information despite having set all log levels to DEBUG. I have looked at the gsyncd logs and the brick logs. At this point, I'm not even certain geo-replication is actually working: the df command on the slave indicates that the volume's brick is being filled with data, but I can't figure out how to confirm that things are working and just slow. I have deleted the geo-replication session, reset the bricks, and started a new session, but still no luck. The data on the volume is less than 20 GB, yet the process has been stuck in "Hybrid Crawl" for over a week.
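
In case it helps, this is roughly what I used to raise the logging and watch the transfer, shown here with the gfs1 session names from my earlier message (the exact option names can differ by version, which is why I listed them first):

# list all config options for this geo-rep session
gluster volume geo-replication gfs1 geo-user@host03::gfs1 config

# raise the gsyncd log verbosity
gluster volume geo-replication gfs1 geo-user@host03::gfs1 config log_level DEBUG

# on the slave node: watch the brick fill up (assuming the slave brick mirrors the primary layout)
watch -n 60 df -h /gfs1-data/brick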


*** gsyncd.log on primary ***
[2021-09-13 21:53:08.4801] D [repce(worker /gfs2-data/brick):195:push] RepceClient: call 25808:140134674573056:1631569988.0047505 keep_alive({'version': (1, 0), 'uuid': '560520f1-d06a-47d9-af6d-153c68016e82', 'retval': 0, 'volume_mark': (1551463906, 939763), 'timeout': 1631570108},) ...
[2021-09-13 21:53:08.40326] D [repce(worker /gfs2-data/brick):215:__call__] RepceClient: call 25808:140134674573056:1631569988.0047505 keep_alive -> 23
[2021-09-13 21:53:11.200769] D [master(worker /gfs2-data/brick):554:crawlwrap] _GMaster: ... crawl #0 done, took 5.043846 seconds
[2021-09-13 21:53:11.237383] D [master(worker /gfs2-data/brick):578:crawlwrap] _GMaster: Crawl info     cluster_stime=61        brick_stime=(-1, 0)
[2021-09-13 21:53:16.240783] D [master(worker /gfs2-data/brick):554:crawlwrap] _GMaster: ... crawl #0 done, took 5.039845 seconds
[2021-09-13 21:53:16.642778] D [master(worker /gfs2-data/brick):578:crawlwrap] _GMaster: Crawl info     cluster_stime=61        brick_stime=(-1, 0)
[2021-09-13 21:53:21.647924] D [master(worker /gfs2-data/brick):554:crawlwrap] _GMaster: ... crawl #0 done, took 5.406957 seconds
[2021-09-13 21:53:21.648072] D [master(worker /gfs2-data/brick):560:crawlwrap] _GMaster: 0 crawls, 0 turns


*** gsyncd.log on slave *** [MESSAGE KEEPS REPEATING]
{'op': 'META', 'skip_entry': False, 'go': '.gfid/341e4a74-b783-4d03-b678-13cd83691ca2', 'stat': {'uid': 33, 'gid': 33, 'mode': 16877, 'atime': 1620058938.6504347, 'mtime': 1630466794.4308176}},
{'op': 'META', 'skip_entry': False, 'go': '.gfid/454dc70d-e57f-4166-b9b1-9dcbc88906ad', 'stat': {'uid': 33, 'gid': 33, 'mode': 16877, 'atime': 1625766157.52317, 'mtime': 1627944976.2114644}},
{'op': 'META', 'skip_entry': False, 'go': '.gfid/f7a63767-3ec3-444f-8890-f6bdc569317a', 'stat': {'uid': 33, 'gid': 33, 'mode': 16877, 'atime': 1623954033.0488186, 'mtime': 1630506668.4986405}},
{'op': 'META', 'skip_entry': False, 'go': '.gfid/e52237fc-d8a7-43e6-8e1d-3f66b6e17bed', 'stat': {'uid': 33, 'gid': 33, 'mode': 16877, 'atime': 1623689028.9785645, 'mtime': 1631113995.6731815}}]
[2021-09-13 21:31:51.388329] I [resource(slave media01/gfs2-data/brick):1098:connect] GLUSTER: Mounting gluster volume locally...
[2021-09-13 21:31:51.490466] D [resource(slave media01/gfs2-data/brick):872:inhibit] MountbrokerMounter: auxiliary glusterfs mount in place
[2021-09-13 21:31:52.579018] D [resource(slave media01/gfs2-data/brick):939:inhibit] MountbrokerMounter: Lazy umount done: /var/mountbroker-root/mb_hive/mntWa5v9P
[2021-09-13 21:31:52.579506] D [resource(slave media01/gfs2-data/brick):946:inhibit] MountbrokerMounter: auxiliary glusterfs mount prepared
[2021-09-13 21:31:52.579624] I [resource(slave media01/gfs2-data/brick):1121:connect] GLUSTER: Mounted gluster volume   duration=1.1912
[2021-09-13 21:31:52.580047] I [resource(slave media01/gfs2-data/brick):1148:service_loop] GLUSTER: slave listening

Regards,

-Boubacar

On Mon, Sep 13, 2021 at 7:53 AM Strahil Nikolov <hunter86_bg at yahoo.com> wrote:

Did you check the logs on the primary nodes under /var/log/glusterfs/geo-replication/<primaryvol_slavenode_slavevol>/?
Best Regards,
Strahil Nikolov
 
 
On Mon, Sep 13, 2021 at 14:55, Boubacar Cisse <cboubacar at gmail.com> wrote:

I'm currently using gluster 6.10 and have geo-replication configured, but the crawl status has been stuck in "Hybrid Crawl" for weeks now. I can't find any potential issues in the logs, and data appears to be transferring, though extremely slowly. Any suggestions on what else to look for to help troubleshoot this issue? Any help will be appreciated.
root at host01:~# gluster --version
glusterfs 6.10
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.

root at host01:~# gluster volume geo-replication gfs1 geo-user at host03::gfs1 status

MASTER NODE    MASTER VOL    MASTER BRICK        SLAVE USER    SLAVE                       SLAVE NODE    STATUS     CRAWL STATUS    LAST_SYNCED
-----------------------------------------------------------------------------------------------------------------------------------------------
host01         gfs1          /gfs1-data/brick    geo-user      geo-user at host03::gfs1      host03        Active     Hybrid Crawl    N/A
host02         gfs1          /gfs1-data/brick    geo-user      geo-user at host03::gfs1      host03        Passive    N/A             N/A
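
For reference, the detailed variant of the same status command reports per-worker ENTRY, DATA, META, and FAILURES counters, which should show whether individual files are actually syncing:

gluster volume geo-replication gfs1 geo-user@host03::gfs1 status detail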

Regards,
-Boubacar


