[Gluster-users] Sharding - what next?

Krutika Dhananjay kdhananj at redhat.com
Thu Dec 10 07:33:34 UTC 2015


----- Original Message -----

> From: "Lindsay Mathieson" <lindsay.mathieson at gmail.com>
> To: "Krutika Dhananjay" <kdhananj at redhat.com>, "Gluster Devel"
> <gluster-devel at gluster.org>, "gluster-users" <gluster-users at gluster.org>
> Sent: Wednesday, December 9, 2015 6:48:40 PM
> Subject: Re: Sharding - what next?

> Hi Guys, sorry for the late reply, my attention tends to be somewhat sporadic
> due to work and the large number of rescue dogs/cats I care for :)

> On 3/12/2015 8:34 PM, Krutika Dhananjay wrote:

> > We would love to hear from you on what you think of the feature and where
> > it could be improved.
> 
> > Specifically, the following are the questions we are seeking feedback on:
> 
> > a) your experience testing sharding with VM store use-case - any bugs you
> > ran into, any performance issues, etc
> 

> Testing was initially somewhat stressful as I regularly encountered file
> corruption. However, I don't think that was due to bugs, but rather to
> incorrect settings for the VM use case. Once I got that sorted out it has
> been very stable - I have really stressed the failure modes we run into at
> work: nodes going down while heavy writes were happening, live migrations
> during heals, gluster software being killed while VMs were running on the
> host. So far it's held up without a hitch.

> To that end, one thing I think should be made more obvious is the set of
> options required for VM hosting:

> > quick-read=off
> > read-ahead=off
> > io-cache=off
> > stat-prefetch=off
> > eager-lock=enable
> > remote-dio=enable
> > quorum-type=auto
> > server-quorum-type=server

> They are quite crucial and very easy to miss in the online docs. And they are
> only listed as recommended, with no mention that you will corrupt KVM VMs if
> you live migrate them between gluster nodes without them set. Also, the virt
> group is missing from the Debian packages.
Hi Lindsay, 
Thanks for the feedback. I will get in touch with Humble to find out what can be done about the docs. 
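For reference, and as a rough sketch until the docs are improved (the volume name 'datastore1' below is just an example), the options can be set individually, or in one shot via the virt group profile on packages that ship it:

    # set the recommended VM-store options one by one (full option names)
    gluster volume set datastore1 performance.quick-read off
    gluster volume set datastore1 performance.read-ahead off
    gluster volume set datastore1 performance.io-cache off
    gluster volume set datastore1 performance.stat-prefetch off
    gluster volume set datastore1 cluster.eager-lock enable
    gluster volume set datastore1 network.remote-dio enable
    gluster volume set datastore1 cluster.quorum-type auto
    gluster volume set datastore1 cluster.server-quorum-type server

    # or, where the virt group file (/var/lib/glusterd/groups/virt) is packaged:
    gluster volume set datastore1 group virt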

> Setting them does seem to have slowed sequential writes by about 10%, but I
> need to test that more.

> Something related - sharding is useful because it makes heals much more
> granular and hence faster. To that end it would be really useful if there
> were a heal info variant that gave an overview of the process - rather than
> listing the shards that are being healed, just an aggregate total, e.g.

> $ gluster volume heal datastore1 status
> volume datastore1
> - split-brain: 0
> - wounded: 65
> - healing: 4

> It gives one an easy feeling of progress - heals aren't happening faster, but
> it would feel that way :)
There is a 'heal-info summary' command under review from Mohammed Ashiq at http://review.gluster.org/#/c/12154/3 which prints the number of files that are yet to be healed.
It could perhaps be enhanced to also print the number of files in split-brain and the number of files possibly being healed. Note that these counts are printed per brick;
it does not print a single set of aggregated counts. Would an aggregated view be something you would consider useful?
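Until something like that lands, a crude aggregate can be scraped from the existing output - a rough sketch, assuming the per-brick "Number of entries" lines that heal info currently prints:

    # sum the per-brick pending-heal counts into a single total
    gluster volume heal datastore1 info | \
        awk -F': ' '/^Number of entries/ {total += $2} END {print "entries pending heal:", total}'

    # the split-brain listing can be totalled the same way
    gluster volume heal datastore1 info split-brain | \
        awk -F': ' '/^Number of entries/ {total += $2} END {print "entries in split-brain:", total}'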

> Also, it would be great if the heal info command could return faster;
> sometimes it takes over a minute.
Yeah, I think part of the problem could be the eager-lock feature, which causes the GlusterFS client process to hold on to the network lock on the file longer than necessary, leaving the heal info utility blocked for a longer duration.
There is an enhancement Anuradha Talur is working on where heal-info would do away with taking locks altogether. Once that is in place, heal-info should return faster.
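In the meantime, if you want to quantify the delay (and track whether these changes help), timing the command itself is a simple, if rough, measure ('datastore1' again being an example volume name):

    # rough measure of how long heal info blocks before returning
    time gluster volume heal datastore1 info > /dev/null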

-Krutika 

> Thanks for the great work,

> Lindsay