<div dir="auto">The latter one is the one I have been referring to. And it is pretty dangerous Imho</div><div class="gmail_extra"><br><div class="gmail_quote">Il 31/ago/2017 01:19, <<a href="mailto:lemonnierk@ulrar.net" target="_blank">lemonnierk@ulrar.net</a>> ha scritto:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Solved as to 3.7.12. The only bug left is when adding new bricks to<br>
create a new replica set, now sure where we are now on that bug but<br>
that's not a common operation (well, at least for me).<br>
<br>
On Wed, Aug 30, 2017 at 05:07:44PM +0200, Ivan Rossi wrote:<br>
> There has been a bug associated with sharding that led to VM corruption,
> and it has been around for a long time (difficult to reproduce, as I
> understood). I have not seen reports on it for some time after the last
> fix, so hopefully VM hosting is now stable.
>
> On 2017-08-30 at 03:57 GMT+02:00, Everton Brogliatto <brogliatto@gmail.com> wrote:
>
> > Ciao Gionatan,
> >
> > I run Gluster 3.10.x (replica 3 arbiter, i.e. replica 2 + 1 arbiter) to
> > provide storage for oVirt 4.x, and I have had no major issues so far.
> > I have done online upgrades a couple of times, and been through power
> > losses, maintenance, etc. with no issues. Overall, it is very resilient.
> >
> > An important thing to keep in mind is your network. I run the Gluster
> > nodes on a redundant network using bonding mode 1 (active-backup), and I
> > have performed maintenance on my switches, bringing one of them off-line
> > at a time, without causing problems in my Gluster setup or in my running
> > VMs.
> > Gluster's recommendation is to enable jumbo frames across the
> > subnet/servers/switches you use for Gluster operations. Your switches
> > must support an MTU of at least 9000 + 208 bytes.
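> >
> > As a minimal sketch (the NIC names eth0/eth1, the bond name bond0 and
> > the peer host name are just placeholders), an active-backup bond with
> > jumbo frames can be brought up with iproute2 roughly like this:
> >
> >     # bonding mode 1 (active-backup) over two NICs
> >     ip link add bond0 type bond mode active-backup miimon 100
> >     ip link set eth0 down && ip link set eth0 master bond0
> >     ip link set eth1 down && ip link set eth1 master bond0
> >     # jumbo frames: the MTU set on the bond propagates to its slaves
> >     ip link set bond0 mtu 9000
> >     ip link set bond0 up
> >     # check that 9000-byte frames really pass end to end
> >     # (8972 = 9000 - 28 bytes of IP/ICMP headers)
> >     ping -M do -s 8972 gluster-node2
> >
> > You would normally make this persistent through your distribution's
> > network configuration rather than running it by hand.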
> >
> > There were two occasions where I purposely caused a split-brain
> > situation, and I was able to heal the files manually.
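> >
> > For reference (VOLNAME and the file path are placeholders), Gluster also
> > lets you resolve file split-brain from the CLI, e.g.:
> >
> >     # list the files currently in split-brain
> >     gluster volume heal VOLNAME info split-brain
> >     # resolve a given file with a chosen policy, e.g. keep the newest copy
> >     gluster volume heal VOLNAME split-brain latest-mtime /path/inside/the/volume/disk.img
> >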
> >
> > Volume performance tuning can make a significant difference in Gluster.
> > As others have mentioned previously, sharding is recommended when
> > running VMs, as it splits big files into smaller pieces, making it
> > easier for healing to occur.
> > When you enable sharding, the default shard block size is 4MB, which
> > will significantly reduce your write speeds. oVirt recommends a shard
> > block size of 512MB.
> > The volume options you are looking at here are:
> > features.shard on
> > features.shard-block-size 512MB
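> >
> > For example (VOLNAME being a placeholder for your volume name):
> >
> >     gluster volume set VOLNAME features.shard on
> >     gluster volume set VOLNAME features.shard-block-size 512MB
> >     # confirm the value took effect
> >     gluster volume get VOLNAME features.shard-block-size
> >
> > As far as I know, the shard block size is recorded per file at creation
> > time, so changing it only affects files written afterwards.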
> >
> > I had an experimental setup in replica 2 using an older version of
> > Gluster a few years ago, and it was unstable, corrupted data and crashed
> > many times. Do not use replica 2. As others have already said, the
> > minimum is replica 2 + 1 arbiter.
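> >
> > As a sketch (host names and brick paths here are made up), an arbitrated
> > replicated volume is created like this:
> >
> >     gluster volume create VOLNAME replica 3 arbiter 1 \
> >         server1:/bricks/brick1 server2:/bricks/brick1 server3:/bricks/arbiter1
> >
> > The arbiter brick stores only file names and metadata, so it needs far
> > less space than the data bricks while still providing quorum.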
> >
> > If you have any questions that I can perhaps help with, drop me an email.
> >
> >
> > Regards,
> > Everton Brogliatto
> >
> >
> > On Sat, Aug 26, 2017 at 1:40 PM, Gionatan Danti <g.danti@assyoma.it> wrote:
> >
> >> On 26-08-2017 at 07:38, Gionatan Danti wrote:
> >>
> >>> I'll surely take a look at the documentation. I have the "bad" habit
> >>> of not putting into production anything I know how to repair/cope
> >>> with.
> >>>
> >>> Thanks.
> >>>
> >>
> >> Mmmm, this should read as:
> >>
> >> "I have the 'bad' habit of not putting into production anything I do
> >> NOT know how to repair/cope with."
> >>
> >> Really :D
> >>
> >>
> >> Thanks.
> >>
> >> --
> >> Danti Gionatan
> >> Technical Support
> >> Assyoma S.r.l. - www.assyoma.it
> >> email: g.danti@assyoma.it - info@assyoma.it
> >> GPG public key ID: FF5F32A8

_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users