[Gluster-users] "Granular locking" - does this need to be enabled in 3.3.0 ?

Anand Avati anand.avati at gmail.com
Mon Jul 9 17:49:18 UTC 2012


Was this the client log or the glustershd log?

Thanks,
Avati

On Mon, Jul 9, 2012 at 8:23 AM, Jake Grimmett <jog at mrc-lmb.cam.ac.uk> wrote:

> Hi Fernando / Christian,
>
> Many thanks for getting back to me.
>
> Slow writes are acceptable; most of our VMs are small web servers with
> low traffic. My aim is to have a fully self-contained two-server KVM
> cluster with live migration, no external storage, and the ability to reboot
> either node with zero VM downtime. We seem to be "almost there", bar a
> hiccup when the self-heal is in progress and some minor grumbles from
> sanlock (which might be fixed by the new sanlock in RHEL 6.3).
>
> Incidentally, the log shows a "diff" self-heal on a node reboot:
>
> [2012-07-09 16:04:06.743512] I [afr-self-heal-algorithm.c:122:sh_loop_driver_done]
> 0-gluster-rep-replicate-0: diff self-heal on /box1-clone2.img: completed.
> (16 blocks of 16974 were different (0.09%))
>
> So, does this log show "Granular locking" occurring, or does it just
> happen transparently when a file exceeds a certain size?
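>
> Presumably I could also keep an eye on the heal while it runs with the
> new 3.3 heal commands - something along these lines, if I've understood
> them correctly ("gluster-rep" being the volume name in the log above):
>
>   # entries still waiting to be healed
>   gluster volume heal gluster-rep info
>   # entries healed recently
>   gluster volume heal gluster-rep info healed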
>
> many thanks
>
> Jake
>
>
>
> On 07/09/2012 04:01 PM, Fernando Frediani (Qube) wrote:
>
>> Jake,
>>
>> I haven't had a chance to test with my KVM cluster yet, but it should be
>> a default feature from 3.3 onwards.
>>
>> Just bear in mind that running virtual machines is NOT a supported use
>> case for Red Hat Storage Server, according to Red Hat sales people; they
>> said to expect it towards the end of the year. As you might have
>> observed, performance, especially for writes, isn't anywhere near
>> fantastic.
>>
>>
>> Fernando
>>
>> From: gluster-users-bounces at gluster.org
>> [mailto:gluster-users-bounces at gluster.org] On Behalf Of Christian Wittwer
>> Sent: 09 July 2012 15:51
>> To: Jake Grimmett
>> Cc: gluster-users at gluster.org
>> Subject: Re: [Gluster-users] "Granular locking" - does this need to be
>> enabled in 3.3.0 ?
>>
>> Hi Jake
>>
>> I can confirm exactly the same behaviour with gluster 3.3.0 on Ubuntu
>> 12.04. During the self-heal process the VM gets 100% I/O wait and is
>> locked.
>>
>> After the self-heal, the root filesystem was read-only, which forced me
>> to do a reboot and fsck.
>>
>> Cheers,
>>
>> Christian
>>
>> 2012/7/9 Jake Grimmett <jog at mrc-lmb.cam.ac.uk>
>>
>>
>> Dear All,
>>
>> I have a pair of Scientific Linux 6.2 servers acting as KVM
>> virtualisation hosts for ~30 VMs. The VM images are stored in a
>> replicated gluster volume shared between the two servers. Live migration
>> works fine, and sanlock prevents me from (stupidly) starting the
>> same VM on both machines. Each server has 10Gb Ethernet and a 10-disk
>> RAID5 array.
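>>
>> (For reference, the volume is a plain two-way replica, created roughly
>> along these lines - the hostnames and brick paths here are placeholders
>> rather than our real ones:
>>
>>   gluster volume create gluster-rep replica 2 transport tcp \
>>       server1:/export/brick1 server2:/export/brick1
>>   gluster volume start gluster-rep
>>
>> with "gluster-rep" being the volume name that appears in the logs.)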
>>
>> If I migrate all the VMs to server #1 and shut down server #2, all works
>> perfectly with no interruption. When I restart server #2, the VMs
>> freeze while the self-heal process is running - and this healing can
>> take a long time.
>>
>> I'm not sure if "Granular Locking" is on. It's listed as a "technology
>> preview" in the Red Hat Storage Server 2 notes - do I need to do anything
>> to enable it?
>>
>> i.e. set "cluster.data-self-heal-algorithm" to diff?
>> or edit "cluster.self-heal-window-size"?
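>>
>> If these do need switching on, I assume it would be done with "volume
>> set" - a sketch only, using the volume name from our logs and values I
>> haven't actually tried:
>>
>>   # show the current settings, including any reconfigured options
>>   gluster volume info gluster-rep
>>   # heal only the changed blocks rather than copying the whole file
>>   gluster volume set gluster-rep cluster.data-self-heal-algorithm diff
>>   # number of blocks self-heal works on at a time (16 is just a guess)
>>   gluster volume set gluster-rep cluster.self-heal-window-size 16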
>>
>> any tips from other people doing something similar would be much appreciated!
>>
>> Many thanks,
>>
>> Jake
>>
>> jog <---at---> mrc-lmb.cam.ac.uk
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>>
>>
>
> --
> Dr Jake Grimmett
> Head Of Scientific Computing
> MRC Laboratory of Molecular Biology
> Hills Road, Cambridge, CB2 0QH, UK.
> Phone 01223 402219
> Mobile 0776 9886539
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>

