[Gluster-devel] [Gluster-users] BoF - Gluster for VM store use case

Ben Turner bturner at redhat.com
Wed Nov 1 00:36:50 UTC 2017


----- Original Message -----
> From: "Sahina Bose" <sabose at redhat.com>
> To: gluster-users at gluster.org
> Cc: "Gluster Devel" <gluster-devel at gluster.org>
> Sent: Tuesday, October 31, 2017 11:46:57 AM
> Subject: [Gluster-users] BoF - Gluster for VM store use case
> 
> During Gluster Summit, we discussed gluster volumes as storage for VM
> images - feedback on the use case and upcoming features that may benefit
> this use case.
> 
> Some of the points discussed
> 
> * Need to ensure there are no issues when expanding a gluster volume when
> sharding is turned on.
> * A throttling feature for the self-heal and rebalance processes could be
> useful for this use case
> * Erasure coded volumes with sharding - seen as a good fit for VM disk
> storage
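
On the sharding point: enabling it is just a couple of volume options.  A minimal sketch, assuming a volume named "vmstore" (the default block size has changed across releases, so treat the value below as something to test for your workload):

    # turn sharding on for the volume and set the shard size
    gluster volume set vmstore features.shard on
    gluster volume set vmstore features.shard-block-size 64MB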

I am working on this with a customer, and we have been able to hit 400-500 MB/sec writes, where things normally max out at ~150-250 MB/sec.  The trick is to use multiple image files, build the LVM stack on top of them, and use native LVM striping.  We have found that 4-6 files seem to give the best perf on our setup.  I don't think we are using sharding on the EC vols, just multiple files and LVM striping.  Sharding may let you avoid the LVM striping, but I bet dollars to doughnuts you won't see this level of perf :)

I am working on a blog post on RHHI and RHEV + RHS performance where in some cases I am able to get 2x+ the performance out of VMs / VM storage.  I'd be happy to share my data / findings.
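A rough sketch of what that looks like, assuming the striping is done inside the guest across four virtio disks (/dev/vdb through /dev/vde), each backed by its own image file on the gluster volume - device names and stripe size are just examples:

    # one PV per gluster-backed virtual disk
    pvcreate /dev/vdb /dev/vdc /dev/vdd /dev/vde
    vgcreate data_vg /dev/vdb /dev/vdc /dev/vdd /dev/vde
    # stripe across all 4 PVs: -i = stripe count, -I = stripe size in KiB
    lvcreate -n data_lv -i 4 -I 256 -l 100%FREE data_vg
    mkfs.xfs /dev/data_vg/data_lv
    mount /dev/data_vg/data_lv /data

The point is that each stripe member is a separate file on the gluster volume, so writes fan out across files instead of hammering a single one.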

> * Performance related
> ** Accessing qemu images using the gfapi driver does not perform as well as
> fuse access. Need to understand why.

+1.  I have some ideas here from my own research.  Happy to share these as well.
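
For anyone who wants to compare the two access paths themselves, a minimal sketch (server and volume names are made up; the gluster:// URI form needs qemu built with gfapi support):

    # fuse path: qemu goes through the glusterfs fuse mount
    mount -t glusterfs server1:/vmstore /mnt/vmstore
    qemu-img create -f qcow2 /mnt/vmstore/test.qcow2 20G

    # gfapi path: qemu talks to the volume directly, no fuse in between
    qemu-img create -f qcow2 gluster://server1/vmstore/test.qcow2 20G

Running the same benchmark inside a guest on each image is the easiest way to see the gap for yourself.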

> ** Using ZFS with a cache device, or lvmcache under an XFS filesystem, is
> seen to improve performance

I have done some interesting stuff with customers here too.  Nothing with VMs, IIRC; it was more for backing up bricks without geo-rep (geo-rep was too slow for them).
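
If anyone wants to experiment with lvmcache under a brick, the shape of it is roughly this (device names are examples, and the extent percentages are just a starting point - the cache pool needs some headroom for metadata):

    # slow bulk device + fast cache device in one VG
    pvcreate /dev/sdb /dev/nvme0n1
    vgcreate brick_vg /dev/sdb /dev/nvme0n1
    # brick LV on the slow device, cache pool on the fast one
    lvcreate -n brick_lv -l 100%PVS brick_vg /dev/sdb
    lvcreate --type cache-pool -n brick_cache -l 90%PVS brick_vg /dev/nvme0n1
    # attach the cache pool to the brick LV, then put XFS on top
    lvconvert --type cache --cachepool brick_vg/brick_cache brick_vg/brick_lv
    mkfs.xfs /dev/brick_vg/brick_lv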

-b

> 
> If you have any further inputs on this topic, please add to thread.
> 
> thanks!
> sahina
> 
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users

