[Gluster-users] Sharding - what next?
lindsay.mathieson at gmail.com
Wed Dec 9 13:18:40 UTC 2015
Hi Guys, sorry for the late reply, my attention tends to be somewhat
sporadic due to work and the large number of rescue dogs/cats I care for :)
On 3/12/2015 8:34 PM, Krutika Dhananjay wrote:
> We would love to hear from you on what you think of the feature and
> where it could be improved.
> Specifically, the following are the questions we are seeking feedback on:
> a) your experience testing sharding with VM store use-case - any bugs
> you ran into, any performance issues, etc
Testing was initially somewhat stressful as I regularly encountered file
corruption. However, I don't think that was due to bugs, but rather to
incorrect settings for the VM use case. Once I got that sorted out it
has been very stable - I have really stressed the failure modes we run
into at work: nodes going down while heavy writes were happening, live
migrations during heals, gluster software being killed while VMs were
running on the host. So far it's held up without a hitch.
To that end, one thing I think should be made more obvious is the
settings required for VM hosting. They are quite crucial and very easy
to miss in the online docs, and they are only listed as recommended,
with no mention that you will corrupt KVM VMs if you live-migrate them
between gluster nodes without them set. Also, the virt group is missing
from the Debian packages.
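Since the group file isn't shipped on Debian, I set the options by hand.
For reference, this is roughly what I'm using - quoting from memory and
my notes, so please double-check it against /var/lib/glusterd/groups/virt
on a distro that ships the group rather than taking it verbatim
(datastore1 is my volume name):
$ # If the group file is present, the whole set applies in one go:
$ gluster volume set datastore1 group virt
$ # Otherwise, the individual options:
$ gluster volume set datastore1 performance.quick-read off
$ gluster volume set datastore1 performance.read-ahead off
$ gluster volume set datastore1 performance.io-cache off
$ gluster volume set datastore1 performance.stat-prefetch off
$ gluster volume set datastore1 cluster.eager-lock enable
$ gluster volume set datastore1 network.remote-dio enable
$ gluster volume set datastore1 cluster.quorum-type auto
$ gluster volume set datastore1 cluster.server-quorum-type server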
Setting them does seem to have slowed sequential writes by about 10%,
but I need to test that more.
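My benchmarking so far has just been a quick dd run inside a guest - a
crude sketch, and the numbers move around a lot with caching, so treat
the 10% figure as preliminary:
$ # Inside the VM; oflag=direct bypasses the guest page cache
$ dd if=/dev/zero of=/tmp/ddtest bs=1M count=2048 oflag=direct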
Something related - sharding is useful because it makes heals much more
granular and hence faster. To that end, it would be really useful if
there was a heal info variant that gave an overview of the process -
rather than listing the shards that are being healed, just an
aggregate, e.g.:
$ gluster volume heal datastore1 status
- split brain: 0
It gives one an easy feeling of progress - heals aren't happening
faster, but it would feel that way :)
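In the meantime, the closest thing I've found is grepping the per-brick
summary lines out of heal info - a rough workaround, and it assumes the
output format of the versions I've been running:
$ gluster volume heal datastore1 info | grep 'Number of entries'
That gives a count per brick rather than one number for the volume, but
it beats scrolling through hundreds of shard names.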
Also, it would be great if the heal info command could return faster -
sometimes it takes over a minute.
Thanks for the great work,