[Gluster-users] "Granular locking" - does this need to be enabled in 3.3.0 ?
Christian Wittwer
wittwerch at gmail.com
Mon Jul 9 14:51:15 UTC 2012
Hi Jake
I can confirm exactly the same behaviour with gluster 3.3.0 on Ubuntu 12.04.
During the self-heal process the VM hits 100% I/O wait and locks up.
After the self-heal, the root filesystem was left read-only, which forced me
to do a reboot and fsck.
Cheers,
Christian
2012/7/9 Jake Grimmett <jog at mrc-lmb.cam.ac.uk>
> Dear All,
>
> I have a pair of Scientific Linux 6.2 servers, acting as KVM
> virtualisation hosts for ~30 VMs. The VM images are stored in a replicated
> gluster volume shared between the two servers. Live migration works fine,
> and sanlock prevents me from (stupidly) starting the same VM on both
> machines. Each server has 10Gb Ethernet and a 10-disk RAID5 array.
>
> If I migrate all the VMs to server #1 and shut down server #2, all works
> perfectly with no interruption. When I restart server #2, the VMs freeze
> while the self-heal process is running - and this healing can take a long
> time.
>
> I'm not sure if "Granular Locking" is on. It's listed as a "technology
> preview" in the Red Hat Storage Server 2 release notes - do I need to do
> anything to enable it?
>
> i.e. set "cluster.data-self-heal-algorithm" to diff?
> or edit "cluster.self-heal-window-size"?
>
> Any tips from people running similar setups would be much appreciated!
>
> Many thanks,
>
> Jake
>
> jog <---at---> mrc-lmb.cam.ac.uk
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
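For anyone hitting the same question: the two volume options Jake mentions can be set (and checked) with the standard gluster CLI. A minimal sketch - the volume name "gv0" is a placeholder, and the window-size value shown is only an illustrative starting point for reducing heal I/O pressure, not a recommended tuning:

```shell
# Use the "diff" self-heal algorithm so only changed blocks are
# copied during heal, instead of full-file copies:
gluster volume set gv0 cluster.data-self-heal-algorithm diff

# Limit how many blocks per file are healed at a time, to reduce
# I/O contention with running VMs (value here is illustrative):
gluster volume set gv0 cluster.self-heal-window-size 1

# Confirm the options took effect:
gluster volume info gv0
```

Both options take effect on the live volume; no remount or restart of the bricks is needed.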