[Gluster-users] GlusterFS as virtual machine storage

Ivan Rossi rouge2507 at gmail.com
Wed Aug 30 15:07:44 UTC 2017


There has been a bug associated with sharding that led to VM corruption and
that has been around for a long time (difficult to reproduce, as I
understood). I have not seen reports of it for some time since the last fix,
so hopefully VM hosting is now stable.

2017-08-30 3:57 GMT+02:00 Everton Brogliatto <brogliatto at gmail.com>:

> Ciao Gionatan,
>
> I run Gluster 3.10.x (replica 3, or replica 2 + 1 arbiter) to provide
> storage for oVirt 4.x, and I have had no major issues so far.
> I have done online upgrades a couple of times and been through power
> losses, maintenance, etc., with no issues. Overall, it is very resilient.
>
> An important thing to keep in mind is your network. I run the Gluster
> nodes on a redundant network using bonding mode 1 (active-backup), and I
> have performed maintenance on my switches, bringing one of them offline at
> a time, without causing problems in my Gluster setup or in my running VMs.
> Gluster's recommendation is to enable jumbo frames across the
> subnet/servers/switches you use for Gluster traffic. Your switches must
> support an MTU of at least 9000 + 208 bytes.
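> A minimal sketch of enabling and verifying jumbo frames (the interface
> name bond0 and the peer address are assumptions, not from this thread):
>
> ```shell
> # Raise the MTU on the bonded interface (repeat on every Gluster node)
> ip link set dev bond0 mtu 9000
>
> # Verify end to end: 8972 = 9000 - 20 (IP header) - 8 (ICMP header).
> # -M do forbids fragmentation, so this succeeds only if the whole
> # path (NICs and switches) really passes jumbo frames.
> ping -M do -s 8972 -c 3 10.0.0.2
> ```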
>
> There were two occasions where I purposely caused a split-brain situation,
> and I was able to heal the files manually.
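> For reference, split-brain resolution can also be driven from the CLI in
> this version; a sketch, assuming a volume named "vmstore" and hypothetical
> file and brick paths:
>
> ```shell
> # List files currently in split-brain on the volume
> gluster volume heal vmstore info split-brain
>
> # Resolve one file by keeping the replica with the newest mtime
> gluster volume heal vmstore split-brain latest-mtime /images/vm01.img
>
> # Or declare one brick the source of truth for that file
> gluster volume heal vmstore split-brain source-brick \
>     node1:/bricks/brick1/vmstore /images/vm01.img
> ```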
>
> Volume performance tuning can make a significant difference in Gluster. As
> others have mentioned previously, sharding is recommended when running VMs,
> as it splits big files into smaller pieces, making it easier for healing
> to occur.
> When you enable sharding, the default shard block size is 4MB, which will
> significantly reduce your write speeds. oVirt recommends a shard block
> size of 512MB.
> The volume options you are looking for here are:
> features.shard on
> features.shard-block-size 512MB
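> Those options are set per volume; a sketch, assuming a volume named
> "vmstore" (note that sharding should be enabled before any VM images are
> written, since toggling it on a volume that already holds data is unsafe):
>
> ```shell
> gluster volume set vmstore features.shard on
> gluster volume set vmstore features.shard-block-size 512MB
>
> # Confirm the options took effect
> gluster volume get vmstore features.shard
> gluster volume get vmstore features.shard-block-size
> ```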
>
> I had an experimental setup in replica 2 using an older version of Gluster
> a few years ago, and it was unstable: it corrupted data and crashed many
> times. Do not use replica 2. As others have already said, the minimum is
> replica 2 + 1 arbiter.
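> For completeness, an arbiter volume is created by declaring the third
> brick as the arbiter; a sketch with assumed host and brick names:
>
> ```shell
> # The arbiter brick stores only file metadata, not data,
> # so it can live on a much smaller disk than the data bricks.
> gluster volume create vmstore replica 3 arbiter 1 \
>   node1:/bricks/brick1/vmstore \
>   node2:/bricks/brick1/vmstore \
>   node3:/bricks/arbiter/vmstore
> gluster volume start vmstore
> ```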
>
> If you have any questions that I perhaps can help with, drop me an email.
>
>
> Regards,
> Everton Brogliatto
>
>
> On Sat, Aug 26, 2017 at 1:40 PM, Gionatan Danti <g.danti at assyoma.it>
> wrote:
>
>> On 26-08-2017 at 07:38, Gionatan Danti wrote:
>>
>>> I'll surely give a look at the documentation. I have the "bad" habit
>>> of not putting into production anything I know how to repair/cope
>>> with.
>>>
>>> Thanks.
>>>
>>
>> Mmmm, this should read as:
>>
>> "I have the "bad" habit of not putting into production anything I do NOT
>> know how to repair/cope with"
>>
>> Really :D
>>
>>
>> Thanks.
>>
>> --
>> Danti Gionatan
>> Supporto Tecnico
>> Assyoma S.r.l. - www.assyoma.it
>> email: g.danti at assyoma.it - info at assyoma.it
>> GPG public key ID: FF5F32A8
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>
>
>
>


More information about the Gluster-users mailing list