[Gluster-users] "Granular locking" - does this need to be enabled in 3.3.0 ?

Fernando Frediani (Qube) fernando.frediani at qubenet.net
Mon Jul 9 15:01:16 UTC 2012


Jake,

I haven't had a chance to test with my KVM cluster yet, but it should be enabled by default from 3.3.
Just bear in mind that running virtual machines is NOT supported on Red Hat Storage Server according to Red Hat sales people; they said support is expected towards the end of the year. As you might have observed, performance, especially for writes, is nowhere near fantastic.

Fernando

From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of Christian Wittwer
Sent: 09 July 2012 15:51
To: Jake Grimmett
Cc: gluster-users at gluster.org
Subject: Re: [Gluster-users] "Granular locking" - does this need to be enabled in 3.3.0 ?

Hi Jake
I can confirm exactly the same behaviour with Gluster 3.3.0 on Ubuntu 12.04. During the self-heal process the VM sits at 100% I/O wait and is locked up.
After the self-heal the root filesystem was read-only, which forced me to do a reboot and fsck.

Cheers,
Christian
2012/7/9 Jake Grimmett <jog at mrc-lmb.cam.ac.uk>
Dear All,

I have a pair of Scientific Linux 6.2 servers acting as KVM virtualisation hosts for ~30 VMs. The VM images are stored in a replicated Gluster volume shared between the two servers. Live migration works fine, and sanlock prevents me from (stupidly) starting the same VM on both machines. Each server has 10 Gb Ethernet and a 10-disk RAID5 array.

If I migrate all the VMs to server #1 and shut down server #2, everything works perfectly with no interruption. When I restart server #2, the VMs freeze while the self-heal process is running - and this healing can take a long time.

I'm not sure if "Granular Locking" is on. It's listed as a "technology preview" in the Red Hat Storage Server 2 notes - do I need to do anything to enable it?

i.e. set "cluster.data-self-heal-algorithm" to diff ?
or edit "cluster.self-heal-window-size" ?

Any tips from other people doing something similar would be much appreciated!

Many thanks,

Jake

jog <---at---> mrc-lmb.cam.ac.uk
_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


