[Gluster-users] How to trigger a resync of a newly replaced empty brick in replicate config?
Alessandro Ipe
Alessandro.Ipe at meteo.be
Thu Feb 1 15:39:41 UTC 2018
Hi,
My volume "home" is configured in replicate mode (gluster version 3.12.4) with the bricks:
server1:/data/gluster/brick1
server2:/data/gluster/brick1
server2:/data/gluster/brick1 was corrupted, so I killed the glusterfsd process serving that brick on server2, unmounted the brick, reformatted it, remounted it, and ran
> gluster volume reset-brick home server2:/data/gluster/brick1 server2:/data/gluster/brick1 commit force
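For reference, the full sequence on server2 was roughly the following (the device name /dev/sdb1 is a placeholder, and I believe the documented procedure uses "reset-brick ... start" to stop the brick instead of killing it manually):
> gluster volume status home          # note the PID of the server2 brick process
> kill <brick-PID>                    # stop glusterfsd for that brick only
> umount /data/gluster/brick1
> mkfs.xfs -f /dev/sdb1               # recreate the brick filesystem
> mount /dev/sdb1 /data/gluster/brick1
> gluster volume reset-brick home server2:/data/gluster/brick1 server2:/data/gluster/brick1 commit force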
I expected the self-heal daemon to start copying the data (about 7.4 TB) from server1:/data/gluster/brick1 to the now-empty server2:/data/gluster/brick1, but it only recreated the directories, not the files.
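As far as I understand, I could force a full heal with the commands below, but that crawls every brick of the volume rather than just the replaced one:
> gluster volume heal home full
> gluster volume heal home info       # shows entries still pending heal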
For the moment, I launched the following on the FUSE mount point to force a lookup (and hence a heal check) on every file:
> find . -print0 | xargs -0 stat >/dev/null
but crawling the whole volume (100 TB) to trigger the self-heal of a single 7.4 TB brick is inefficient.
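If the crawl is unavoidable, a variant that may speed it up (assuming GNU xargs with -P support) is:
> find . -print0 | xargs -0 -n 1000 -P 8 stat >/dev/null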
Is there any trick to self-heal only a single brick, for example by setting some extended attributes on its top-level directory?
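I found references to a trick along these lines in the old replace-brick procedure, where a dummy extended attribute is set and then removed on the FUSE mount point (here assumed to be /mnt/home) to mark the root as needing heal, but I am not sure it still applies to 3.12:
> setfattr -n trusted.non-existent-key -v abc /mnt/home
> setfattr -x trusted.non-existent-key /mnt/home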
Many thanks,
Alessandro