[Gluster-devel] Gluster 3.3 / Stripe+Replicat / Healing+Locking on VM's

Gareth Bult gareth at encryptec.net
Fri Oct 5 16:18:39 UTC 2012


Hi, 

I was under the impression that under Gluster 3.3 the self-heal process takes granular locks, i.e. it only locks 
the parts of a file it is actually healing at that moment? 
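(The self-heal daemon logs which files it is working on; the default log location should be 
/var/log/glusterfs/glustershd.log, adjust if your packages put it elsewhere, so a simple 

  tail -f /var/log/glusterfs/glustershd.log 

shows what it is healing at any given moment.) 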

I have a 3.3 setup running here and earlier rebooted one of the storage nodes. Thanks to replication, the volume 
holding around 20 VMs (~400 GB) kept running quite happily. However, when Gluster restarted and kicked off 
its self-heal, it queued and LOCKED all 20 VM images, only unlocking each image as the healing process 
finished on it, over a period of several hours (!) 
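For anyone wanting to watch this happen, the heal backlog and its progress are visible from the CLI: 

  gluster volume heal <VOLNAME> info 
  gluster volume heal <VOLNAME> info healed 
  gluster volume heal <VOLNAME> info heal-failed 

The first lists entries still queued for healing; the other two show what has completed or failed. 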

I'm using 3.3.0 release from the semiosis PPA on Ubuntu 12.04. 

Is there a trick to making this work properly, or is there a fix due out that will correct this behaviour? 
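For instance, would disabling the client-side heals and leaving everything to the daemon, i.e. something like 

  gluster volume set <VOLNAME> cluster.data-self-heal off 
  gluster volume set <VOLNAME> cluster.entry-self-heal off 
  gluster volume set <VOLNAME> cluster.metadata-self-heal off 

(or switching the data heal to the diff algorithm via cluster.data-self-heal-algorithm) sidestep the 
whole-file locks, or is it the self-heal daemon itself that takes them? 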

tia 
Gareth. 

volume enc-client-0 
type protocol/client 
option remote-host 10.1.0.1 
option remote-subvolume /srv/enc 
option transport-type tcp 
option username *** 
option password *** 
end-volume 

volume enc-client-1 
type protocol/client 
option remote-host 10.2.0.4 
option remote-subvolume /srv/enc 
option transport-type tcp 
option username *** 
option password *** 
end-volume 

volume enc-client-2 
type protocol/client 
option remote-host 10.2.0.3 
option remote-subvolume /srv/enc 
option transport-type tcp 
option username *** 
option password *** 
end-volume 

volume enc-client-3 
type protocol/client 
option remote-host 10.1.0.2 
option remote-subvolume /srv/enc 
option transport-type tcp 
option username *** 
option password *** 
end-volume 

volume enc-replicate-0 
type cluster/replicate 
option background-self-heal-count 0 
option metadata-self-heal on 
option data-self-heal on 
option entry-self-heal on 
option self-heal-daemon on 
option iam-self-heal-daemon yes 
subvolumes enc-client-0 enc-client-1 
end-volume 

volume enc-replicate-1 
type cluster/replicate 
option background-self-heal-count 0 
option metadata-self-heal on 
option data-self-heal on 
option entry-self-heal on 
option self-heal-daemon on 
option iam-self-heal-daemon yes 
subvolumes enc-client-2 enc-client-3 
end-volume 
# 
volume glustershd 
type debug/io-stats 
subvolumes enc-replicate-0 enc-replicate-1 
end-volume 