[Gluster-users] Replication does not replicate the reference node

Krzysztof Strasburger strasbur at chkw386.ch.pwr.wroc.pl
Fri Nov 6 08:33:21 UTC 2009

On Fri, Nov 06, 2009 at 01:27:03PM +0530, Vikas Gorur wrote:
> >volume mega-replicator
> >       type       cluster/replicate
> >       subvolumes zbrick1 zbrick2 zbrick3
> >end-volume
> >3) I do exactly the same thing with zbrick1: when I mount zbrick1 again
> >and do an ls -Rl on the mount point, there is nothing; the directory is
> >empty.
> >    So I do an ls -Rl on zbrick2 and on zbrick3, and I get the same
> >result: nothing on the mount point.
> This is a known issue for now:
> http://gluster.com/community/documentation/index.php/Understanding_AFR_Translator#Self-heal_of_a_file_that_does_not_exist_on_the_first_subvolume
Thanks, Vikas! This finally makes it clear to me, and also explains the
problem of the namespace of a unified volume not being healed. A short
question to the developers (sorry if it is somewhere in the manuals), if they
read this: say the same (replicated) volume is mounted on many hosts. If the
subvolumes were reshuffled, i.e., on host1 we had:

volume replicated
 type cluster/replicate
 subvolumes sub1 sub2
end-volume

and on host2:

volume replicated
 type cluster/replicate
 subvolumes sub2 sub1
end-volume

then the following (positive) side effects should occur:
1. After a crash, ls -R would correctly self-heal the volume either on host1
   or on host2 (on whichever one has the newer subvolume first in the
   list).
2. This is probably almost invisible, but the directory-read workload should
   be distributed more evenly between sub1 and sub2.
Is this the right workaround?
If this works, it would be sufficient to modify your mega-replicator entry on
one of the hosts:
subvolumes zbrick2 zbrick1 zbrick3
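For the three-brick case, the per-host volfiles might then look like the
sketch below. This is only an illustration of the reshuffling idea, not a
tested configuration; the volume name mega-replicator and the bricks
zbrick1-zbrick3 are taken from the quoted config, and the particular rotation
per host is my own assumption:

# volfile fragment on host1 (order as in the original config)
volume mega-replicator
  type       cluster/replicate
  subvolumes zbrick1 zbrick2 zbrick3
end-volume

# volfile fragment on host2 (rotated so a different brick comes first)
volume mega-replicator
  type       cluster/replicate
  subvolumes zbrick2 zbrick3 zbrick1
end-volume

# volfile fragment on host3
volume mega-replicator
  type       cluster/replicate
  subvolumes zbrick3 zbrick1 zbrick2
end-volume

With such a rotation, each brick is the first subvolume on exactly one host,
so an ls -R run from the "right" host should be able to heal a file that is
missing on any single brick.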

