[Gluster-users] "Granular locking" - does this need to be enabled in 3.3.0 ?
Pranith Kumar Karampuri
pkarampu at redhat.com
Tue Jul 10 03:44:16 UTC 2012
Granular locking is the only way data self-heal is performed at the moment. Could you give us the steps to re-create this issue, so that we can test this scenario locally? I will raise a bug with the info you provide.
This is roughly the info I am looking for:
1) What is the size of each VM? (Number of VMs: 30, as per your mail.)
2) What is the kind of load in the VM? You said small web servers with low traffic; what kind of traffic is it? Writes (uploads of files), reads, etc.
3) Steps leading to the hang.
4) If you think you can re-create the issue, can you post the statedumps of the brick processes and the mount process when the hangs appear?
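For reference, the statedumps being asked for can be generated roughly like this in 3.3 (a sketch: the volume name gluster-rep is taken from the log line quoted later in this thread, the mount-point pattern is a placeholder, and the dump directory varies by build, typically /tmp or /var/run/gluster):

```shell
# Dump the state of all brick processes of the volume;
# each brick writes a *.dump file on its own server.
gluster volume statedump gluster-rep

# For the FUSE mount (client) process, send SIGUSR1 to the
# glusterfs process serving the mount ("<mountpoint>" is a
# placeholder for your actual mount path).
kill -USR1 $(pgrep -f 'glusterfs.*<mountpoint>')
```

The dump files are plain text and can be attached to the bug directly.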
----- Original Message -----
From: "Jake Grimmett" <jog at mrc-lmb.cam.ac.uk>
To: "Anand Avati" <anand.avati at gmail.com>
Cc: "Jake Grimmett" <jog at mrc-lmb.cam.ac.uk>, gluster-users at gluster.org
Sent: Monday, July 9, 2012 11:51:19 PM
Subject: Re: [Gluster-users] "Granular locking" - does this need to be enabled in 3.3.0 ?
This is one entry (of many) in the client log when bringing my second node
of the cluster back up; the glustershd.log is completely silent at this point.
If you're interested in seeing the nodes split & reconnect, the relevant
glustershd.log section is at http://pastebin.com/0Va3RxDD
> Was this the client log or the glustershd log?
> On Mon, Jul 9, 2012 at 8:23 AM, Jake Grimmett <jog at mrc-lmb.cam.ac.uk> wrote:
>> Hi Fernando / Christian,
>> Many thanks for getting back to me.
>> Slow writes are acceptable; most of our VMs are small web servers with
>> low traffic. My aim is to have a fully self-contained two-server KVM
>> cluster with live migration, no external storage, and the ability to shut
>> down either node with zero VM downtime. We seem to be "almost there", bar a
>> hiccup while the self-heal is in progress and some minor grumbles from
>> sanlock (which might be fixed by the new sanlock in RHEL 6.3).
>> Incidentally, the logs show a "diff" self-heal on a node reboot:
>> [2012-07-09 16:04:06.743512] I
>> 0-gluster-rep-replicate-0: diff self-heal on /box1-clone2.img:
>> (16 blocks of 16974 were different (0.09%))
>> So, does this log show "Granular locking" occurring, or does it just
>> happen transparently when a file exceeds a certain size?
>> many thanks
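As a sanity check, the 0.09% in the log excerpt above follows directly from the block counts it reports; a quick calculation (the block size itself is whatever cluster.self-heal-window-size implies and is not shown in the log):

```python
# Worked example for the log line above: the "diff" self-heal
# compares checksums block by block, and here found only 16 of
# 16974 blocks differing, so only those 16 get re-copied.
blocks_total = 16974
blocks_changed = 16

pct_changed = 100.0 * blocks_changed / blocks_total
print(f"{pct_changed:.2f}% of blocks differ")  # 0.09%, matching the log

# A "full" self-heal, by contrast, would re-copy every block.
savings = 100.0 - pct_changed
print(f"diff self-heal avoids re-copying {savings:.2f}% of the file")
```

This is why the diff algorithm is attractive for large, mostly-static VM images.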
>> On 07/09/2012 04:01 PM, Fernando Frediani (Qube) wrote:
>>> I haven’t had a chance to test with my KVM cluster yet, but it should be
>>> a default thing from 3.3.
>>> Just bear in mind that running virtual machines is NOT a supported thing
>>> for Redhat Storage server, according to Redhat sales people. They said
>>> support is expected towards the end of the year. As you might have observed,
>>> performance, especially for writes, isn’t anywhere near fantastic.
>>> *From:* gluster-users-bounces at gluster.org
>>> [mailto:gluster-users-bounces at gluster.org]
>>> *On Behalf Of *Christian Wittwer
>>> *Sent:* 09 July 2012 15:51
>>> *To:* Jake Grimmett
>>> *Cc:* gluster-users at gluster.org
>>> *Subject:* Re: [Gluster-users] "Granular locking" - does this need to be
>>> enabled in 3.3.0 ?
>>> Hi Jake
>>> I can confirm exactly the same behaviour with gluster 3.3.0 on Ubuntu
>>> 12.04. During the self-heal process the VM gets 100% I/O wait and is
>>> unresponsive. After the self-heal the root filesystem was read-only, which
>>> forced me to do a reboot and fsck.
>>> 2012/7/9 Jake Grimmett <jog at mrc-lmb.cam.ac.uk>
>>> Dear All,
>>> I have a pair of Scientific Linux 6.2 servers, acting as KVM
>>> virtualisation hosts for ~30 VMs. The VM images are stored in a
>>> replicated gluster volume shared between the two servers. Live migration
>>> works fine, and sanlock prevents me from (stupidly) starting the
>>> same VM on both machines. Each server has 10Gb ethernet and a 10-disk
>>> RAID5 array.
>>> If I migrate all the VMs to server #1 and shut down server #2, all goes
>>> perfectly with no interruption. When I restart server #2, the VMs
>>> freeze while the self-heal process is running - and this healing can
>>> take a long time.
>>> I'm not sure if "Granular Locking" is on. It's listed as a "technology
>>> preview" in the Redhat Storage server 2 notes - do I need to do anything
>>> to enable it?
>>> i.e. set "cluster.data-self-heal-algorithm" to diff?
>>> or edit "cluster.self-heal-window-size"?
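For what it's worth, both of these are ordinary volume options settable with `gluster volume set`; a sketch, assuming the volume name gluster-rep from the log excerpt above, with 16 as a purely illustrative window value (granular locking itself needs no switch, per the reply at the top of this thread):

```shell
# Select the block-checksum "diff" algorithm for data self-heal
# instead of copying whole files ("full").
gluster volume set gluster-rep cluster.data-self-heal-algorithm diff

# Number of self-heal blocks kept in flight at once;
# 16 is only an example value, not a recommendation.
gluster volume set gluster-rep cluster.self-heal-window-size 16

# Confirm the options took effect.
gluster volume info gluster-rep
```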
>>> any tips from other people doing something similar would be much appreciated!
>>> Many thanks,
>>> jog <---at---> mrc-lmb.cam.ac.uk
>>> Gluster-users mailing list
>>> Gluster-users at gluster.org
>> Dr Jake Grimmett
>> Head Of Scientific Computing
>> MRC Laboratory of Molecular Biology
>> Hills Road, Cambridge, CB2 0QH, UK.
>> Phone 01223 402219
>> Mobile 0776 9886539