<div dir="ltr">There has ben a bug associated to sharding that led to VM corruption that has been around for a long time (difficult to reproduce I understood). I have not seen reports on that for some time after the last fix, so hopefully now VM hosting is stable.</div><div class="gmail_extra"><br><div class="gmail_quote">2017-08-30 3:57 GMT+02:00 Everton Brogliatto <span dir="ltr"><<a href="mailto:brogliatto@gmail.com" target="_blank">brogliatto@gmail.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div><div>Ciao Gionatan,<br><br></div>I run Gluster 3.10.x (Replica 3 arbiter or 2 + 1 arbiter) to provide storage for oVirt 4.x and I have had no major issues so far.<br></div>I have done online upgrades a couple of times, power losses, maintenance, etc with no issues. Overall, it is very resilient.</div><div><br></div><div>Important thing to keep in mind is your network, I run the Gluster nodes on a redundant network using bonding mode 1 and I have performed maintenance on my switches, bringing one of them off-line at a time without causing problems in my Gluster setup or in my running VMs.</div><div>Gluster recommendation is to enable jumbo frames across the subnet/servers/switches you use for Gluster operations. Your switches must support MTU 9000 + 208 at least.<br></div><div><br></div><div>There were two occasions where I purposely caused a split brain situation and I was able to heal the files manually.</div><div><br></div><div>Volume performance tuning can make a significant difference in Gluster. As others have mentioned previously, sharding is recommended when running VMs as it will split big files in smaller pieces, making it easier for the healing to occur.</div><div>When you enable sharding, the default sharding block size is 4MB which will significantly reduce your writing speeds. oVirt recommends the shard block size to be 512MB.</div><div>The volume options you are looking here are:</div><div><div style="outline-style:none"><div class="m_-6803224320242864211gmail-" style="overflow:hidden;text-overflow:ellipsis;white-space:nowrap" id="m_-6803224320242864211gmail-SubTabVolumeParameterView_table_content_col0_row10">features.shard on</div><div class="m_-6803224320242864211gmail-" style="overflow:hidden;text-overflow:ellipsis;white-space:nowrap"><div style="outline-style:none"><div class="m_-6803224320242864211gmail-" style="overflow:hidden;text-overflow:ellipsis;white-space:nowrap" id="m_-6803224320242864211gmail-SubTabVolumeParameterView_table_content_col0_row11">features.shard-block-size 512MB</div><div class="m_-6803224320242864211gmail-" style="overflow:hidden;text-overflow:ellipsis;white-space:nowrap"><br></div></div></div></div></div><div><div><div><div>I had an experimental setup in replica 2 using an older version of Gluster few years ago and it was unstable, corrupt data and crashed many times. Do not use replica 2. 
>
> I had an experimental setup in replica 2 using an older version of Gluster a few years ago and it was unstable, corrupted data and crashed many times. Do not use replica 2. As others have already said, the minimum is replica 2 + 1 arbiter.
>
> If you have any questions that I perhaps can help with, drop me an email.
>
> Regards,
> Everton Brogliatto
>
> On Sat, Aug 26, 2017 at 1:40 PM, Gionatan Danti <g.danti@assyoma.it> wrote:
>
>> On 26-08-2017 07:38, Gionatan Danti wrote:
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
I'll surely give a look at the documentation. I have the "bad" habit<br>
of not putting into production anything I know how to repair/cope<br>
with.<br>
<br>
Thanks.<br>
</blockquote>
<br></span>
>>
>> Mmmm, this should read as:
>>
>> "I have the "bad" habit of not putting into production anything I do NOT know how to repair/cope with"
>>
>> Really :D
>>
>> Thanks.
>>
>> --
>> Danti Gionatan
>> Technical Support
>> Assyoma S.r.l. - www.assyoma.it
>> email: g.danti@assyoma.it - info@assyoma.it
>> GPG public key ID: FF5F32A8
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users