[Gluster-users] Shard Volume testing (3.7.5)

Krutika Dhananjay kdhananj at redhat.com
Mon Oct 26 04:54:50 UTC 2015


----- Original Message -----

> From: "Lindsay Mathieson" <lindsay.mathieson at gmail.com>
> To: "gluster-users" <gluster-users at gluster.org>
> Sent: Sunday, October 25, 2015 11:59:17 AM
> Subject: [Gluster-users] Shard Volume testing (3.7.5)

> On 18 October 2015 at 00:17, Vijay Bellur < vbellur at redhat.com > wrote:

> > Krutika has been working on several performance improvements for sharding
> > and the results have been encouraging for virtual machine workloads.
> 

> > Testing feedback would be very welcome!
> 

Hi Lindsay, 

Thank you for trying out sharding and for your feedback. :) Please find my comments inline. 

> I've managed to set up a replica 3 3.7.5 shard test volume, hosted on
> virtualised Debian 8.2 servers, so performance is a bit crap :)

> 3 Nodes, gn1, hn2 & gn3
> Each node has:
> - 1GB RAM
> - 1GB Ethernet
> - 512 GB disk hosted on a ZFS External USB Drive :)

> - Datastore is shared out via NFS to the main cluster for running a VM
> - I have the datastore mounted using glusterfs inside each test node so I can
> examine the data directly.

> I've got two VMs running off it, one a 65GB (25GB sparse) Windows 7. I've
> been running benchmarks and testing node failures by killing the cluster
> processes and killing actual nodes.

> - Heal speed is immensely faster, a matter of minutes rather than hours.
> - Read performance is quite good

Good to hear. :) 

> - Write performance is atrocious, but given the limited resources not
> unexpected.

With a block size as low as 4MB, the individual shards appear to the replicate module as a large number of small(er) files, which effectively turns the VM workload into a small-file workload for writes.
Pranith is working on an enhancement in AFR that aims to improve write performance and will be especially useful in combination with sharding. That should make this problem go away.
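To put rough numbers on it (taking your 65GB Windows 7 image as an example, and assuming the default 4MB block size):

    65GB fully allocated / 4MB per shard  ~= 16,640 shard files
    25GB currently in use / 4MB per shard ~=  6,400 shard files

So a single image already translates into thousands of entries under /.shard, which is why the replicate module ends up treating writes as a small-file workload.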

> - I'll be upgrading my main cluster to jessie soon and will be able to test
> with real hardware and bonded connections, plus using gfapi direct. Then
> I'll be able to do real benchmarks.

> One Bug:
> After heals completed I shut down the VMs and ran an md5sum on the VM image
> (via glusterfs) on each node. They all matched except for one time on gn3.
> Once I unmounted/remounted the datastore on gn3 the md5sum matched.

This is possibly the effect of a caching bug reported at https://bugzilla.redhat.com/show_bug.cgi?id=1272986 . The fix is out for review and I'm confident it will make it into 3.7.6. 
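Until the fix lands, unmounting and remounting (as you did) is the right workaround, since it forces fresh lookups from the bricks instead of serving possibly stale cached data. Something along these lines, assuming the datastore is mounted at /mnt/datastore on gn3 (adjust paths and names to your setup):

    umount /mnt/datastore
    mount -t glusterfs gn3:/datastore /mnt/datastore
    md5sum /mnt/datastore/<path-to-vm-image>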

> One Oddity:
> gluster volume heal datastore info *always* shows a split-brain on the
> directory, but it always heals without intervention. Dunno if this is normal
> or not.

Which directory would this be? Do you have the glustershd logs? 
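For reference, the self-heal daemon logs are at /var/log/glusterfs/glustershd.log on each node by default. The output of these two commands (assuming the volume is named 'datastore') would also help:

    gluster volume heal datastore info
    gluster volume heal datastore info split-brain

If the directory only ever shows up in 'info' and never in 'info split-brain', it is most likely just a pending heal being reported rather than an actual split-brain.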

> Questions:
> - I'd be interested to know how the shards are organised and accessed - it
> looks like 1000s of 4MB files in the .shard directory. I'm concerned access
> times will go in the toilet once many large VM images are stored on the
> volume.

Here is some documentation on sharding: https://gluster.readthedocs.org/en/release-3.7.0/Features/shard/ . Let me know if you have more questions, and I will be happy to answer them. 
The problems we foresaw with too many 4MB shards are that 
i. entry self-heal under /.shard could result in a complete crawl of the /.shard directory during heal, and 
ii. a disk replacement could involve a large number of files needing to be created and healed to the sink brick, 
both of which would result in slower "entry" heal and rather high resource consumption by the self-heal daemon. 
Fortunately, with the introduction of more granular changelogs in the replicate module to identify exactly which files under a given directory need to be healed to the sink brick, these problems should go away. 
In fact, this enhancement is being worked on as we speak and is targeted for 3.8. Here is some documentation: http://review.gluster.org/#/c/12257/1/in_progress/afr-self-heal-improvements.md (read the section "Granular entry self-heals"). 
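If you want to see how the shards map back to a given image: every shard under /.shard is named <gfid-of-the-original-file>.<shard-number>, and the first block stays with the original file at its normal path. A quick way to check -- just a sketch, assuming a fuse mount at /mnt/datastore and a brick at /data/brick, so adjust to your layout:

    # gfid of the image, queried via the fuse mount
    getfattr -n glusterfs.gfid.string /mnt/datastore/images/win7.img
    # that image's remaining shards, listed directly on a brick
    ls /data/brick/.shard | grep <gfid-printed-above>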

> - Is it worth experimenting with different shard sizes?

Sure! You can use 'gluster volume set <VOL> features.shard-block-size <size>' to reconfigure the shard size. The new size will be used to shard only those files/images/vdisks that are created _after_ the block size is reconfigured. 
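For example, to try 512MB shards on your test volume (assuming it is named 'datastore'):

    gluster volume set datastore features.shard-block-size 512MB

You would then create a fresh vdisk to compare against the existing 4MB-sharded ones.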

> - Anything you'd like me to test?

Yes. Paul Cuzner and Satheesaran, who have been testing sharding here, have reported better write performance with 512MB shards. I'd be interested to know how performance feels to you with relatively larger shards (think 512MB). 
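Even something quick and dirty, run against the fuse mount (or from inside the VM), would make for a useful before/after comparison between 4MB and 512MB shards. Just a sketch -- the path and sizes here are only examples:

    dd if=/dev/zero of=/mnt/datastore/ddtest.bin bs=1M count=2048 conv=fdatasync
    rm /mnt/datastore/ddtest.bin

conv=fdatasync makes dd flush to disk before reporting throughput, so the number isn't just page cache speed.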

-Krutika 

> Thanks,

> --
> Lindsay
