[Gluster-users] How to trigger a resync of a newly replaced empty brick in replicate config ?

Serkan Çoban cobanserkan at gmail.com
Thu Feb 1 16:32:19 UTC 2018


You do not need to reset-brick if the brick path does not change. Replace
the brick (format and mount it at the same path), then run gluster v start
volname force. To start self-heal, just run gluster v heal volname full.
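
For the volume in the thread below (home, with the replaced brick at
server2:/data/gluster/brick1), a rough sketch of that sequence on server2
could look like the following; the device name /dev/sdX and the XFS
filesystem are assumptions, adjust them to your setup:

  # Recreate the filesystem on the replaced disk and remount it at the
  # original brick path (device name and filesystem are examples only).
  mkfs.xfs -f /dev/sdX
  mount /dev/sdX /data/gluster/brick1

  # Restart the brick process; the brick path is unchanged, so no
  # reset-brick/replace-brick is required.
  gluster volume start home force

  # Queue a full self-heal from the healthy brick and watch its progress.
  gluster volume heal home full
  gluster volume heal home info

The full heal is driven by the self-heal daemon crawling the bricks, so it
avoids stat-crawling the whole volume from a fuse mount.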

On Thu, Feb 1, 2018 at 6:39 PM, Alessandro Ipe <Alessandro.Ipe at meteo.be> wrote:
> Hi,
>
>
> My volume home is configured in replicate mode (version 3.12.4) with the bricks
> server1:/data/gluster/brick1
> server2:/data/gluster/brick1
>
> server2:/data/gluster/brick1 was corrupted, so I killed the gluster daemon for that brick on server2, unmounted it, reformatted it, remounted it and ran
>> gluster volume reset-brick home server2:/data/gluster/brick1 server2:/data/gluster/brick1 commit force
>
> I was expecting that the self-heal daemon would start copying data from server1:/data/gluster/brick1
> (about 7.4 TB) to the empty server2:/data/gluster/brick1, but it only did so for directories, not for files.
>
> For the moment, I launched on the fuse mount point
>> find . | xargs stat
> but crawling the whole volume (100 TB) to trigger self-healing of a single 7.4 TB brick is inefficient.
>
> Is there any trick to self-heal only a single brick, for example by setting some attributes on its top directory?
>
>
> Many thanks,
>
>
> Alessandro
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
