[Gluster-users] Replication does not replicate the reference node

Benjamin Long Benjamin.Long@longbros.com
Fri Nov 6 15:44:04 UTC 2009


On Friday 06 November 2009 04:48:08 am Vikas Gorur wrote:
> Krzysztof Strasburger wrote:
> > volume replicated
> >  type cluster/replicate
> >  subvolumes sub1 sub2
> > end-volume
> >
> > and on host2:
> >
> > volume replicated
> >  type cluster/replicate
> >  subvolumes sub2 sub1
> > end-volume
> >
> > then the following (positive) side effects should occur:
> > 1. After a crash, ls -R would correctly self-heal the volume either on
> > host1 or on host2 (whichever one has the newer subvolume first on its
> > list).
> > 2. This is probably almost invisible, but the directory-read workload
> > should be more evenly distributed between sub1 and sub2.
> > Is this the right workaround?
> 
> This is not a workaround. Shuffling the order of subvolumes can have
> disastrous consequences. Replicate uses the first subvolume as the lock
> server, and if you shuffle the order, the two clients will use different
> subvolumes as lock servers. This can cause data to become inconsistent.
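> 
> In other words, the safe configuration keeps the subvolume order
> identical on every client, so that all clients take locks on the same
> subvolume. A sketch, reusing the volume names from your example:
> 
> volume replicated
>  type cluster/replicate
>  # identical order on host1 and host2: sub1 is the lock server for both
>  subvolumes sub1 sub2
> end-volume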
> 
> We plan to fix this known issue in one of the 3.x releases. In the
> meantime, if you need a workaround, the correct thing to do is to
> generate a list of all files from the second subvolume, like this:
> 
> [root@backend2] # find /export/directory/ > filelist.txt
> 
> Then trigger self heal on all the files from the mountpoint:
> 
> [root@mountpoint] # cat filelist.txt | xargs stat
> 
> This will trigger self-heal and recreate all the files on the
> out-of-date subvolume.
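> 
> Note that filelist.txt holds backend paths, so rewrite them to the
> corresponding paths under the mountpoint before running stat. A sketch,
> assuming the backend exports /export/directory and the volume is
> mounted at /mnt/glusterfs (both paths are examples, adjust to your setup):
> 
> [root@mountpoint] # sed 's|^/export/directory|/mnt/glusterfs|' filelist.txt | xargs -d '\n' stat > /dev/null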
> 
> Vikas

What if you set the lock server count options to the number of subvolumes you have?
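
Something like this is what I have in mind (untested, and the option names
are what I remember from the 2.x replicate docs, so verify them against
your version first):

volume replicated
 type cluster/replicate
 # take locks on both subvolumes rather than only the first one
 option data-lock-server-count 2
 option metadata-lock-server-count 2
 option entry-lock-server-count 2
 subvolumes sub1 sub2
end-volume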

-- 
Benjamin Long


