<div dir="ltr"><span style="color:rgb(0,0,0)">Hello Gionatan,</span><div style="color:rgb(0,0,0)"><br></div><div style="color:rgb(0,0,0)"> Using Gluster bricks in a RAID configuration might be safer and require less work from Gluster admins, but it wastes disk space.</div><div style="color:rgb(0,0,0)">Gluster bricks are replicated (assuming you are creating a distributed-replicated volume), so when a brick goes down it should be easy to recover and should not affect client I/O.</div><div style="color:rgb(0,0,0)">We are using JBOD in all of our Gluster setups; overall, performance is good, and replacing a brick works "most" of the time without issues.</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sun, Jun 21, 2020 at 8:43 PM Gionatan Danti <<a href="mailto:g.danti@assyoma.it">g.danti@assyoma.it</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On 2020-06-21 14:20, Strahil Nikolov wrote:<br>
> With every community project, you are in the position of a Beta<br>
> Tester - no matter Fedora, Gluster or Ceph. So far, I have had<br>
> issues with upstream projects only during and immediately after<br>
> patching - but this is properly mitigated with a reasonable<br>
> patching strategy (patch the test environment and, several months later,<br>
> patch prod with the same repos).<br>
> Enterprise Linux breaks too (and a lot), having 10 times more users and<br>
> use cases, so you cannot expect to start using Gluster and assume<br>
> that a free project won't break at all.<br>
> Our part in this project is to help the devs create a test case for<br>
> our workload, so regressions will be reduced to a minimum.<br>
<br>
Well, this is true, and both devs & community deserve a big thanks for <br>
all the work done.<br>
<br>
> In the past 2 years, we have had 2 major issues with VMware vSAN and 1<br>
> major issue with an Enterprise Storage cluster (both solutions are<br>
> quite expensive) - so I always recommend proper testing of your<br>
> software.<br>
<br>
Interesting, I am almost tempted to ask you what issue you had with <br>
vSAN, but this is not the right mailing list ;)<br>
<br>
> From my observations, almost nobody is complaining about Ganesha on<br>
> the mailing list -> 50% are having issues with geo-replication, 20%<br>
> are having issues with small-file performance, and the rest have<br>
> issues with very old versions of Gluster -> v5 or older.<br>
<br>
Mmm, I could swear I have read quite a few posts where the problem was <br>
solved by migrating away from NFS-Ganesha. Still, for hyperconverged <br>
setups a problem remains: NFS on loopback/localhost is not 100% supported <br>
(or, at least, RH is not willing to declare it supportable/production <br>
ready [1]). A FUSE mount would be the more natural way to access the <br>
underlying data.<br>
<br>
> I can't say that a replace-brick on a 'replica 3' volume is<br>
> riskier than a rebuild of a RAID, but I have noticed that nobody is<br>
> following Red Hat's guide to use either:<br>
> - a RAID6 of 12 disks (2-3 TB each)<br>
> - a RAID10 of 12 disks (2-3 TB each)<br>
> - JBOD disks in 'replica 3' mode (I'm not sure about the size RH<br>
> recommends, most probably 2-3 TB)<br>
> So far, I didn't have the opportunity to run on JBODs.<br>
<br>
For the RAID6/10 setups, I found no issues: simply replace the broken <br>
disk without involving Gluster at all. However, this also means facing <br>
the "IOPS wall" I described earlier for single-brick nodes. Going <br>
full-Gluster with JBODs would be interesting from a performance <br>
standpoint, but it complicates eventual recovery from bad disks.<br>
<br>
Does someone use Gluster in JBOD mode? If so, can you share your <br>
experience?<br>
Thanks.<br>
<br>
[1] <a href="https://access.redhat.com/solutions/22231" rel="noreferrer" target="_blank">https://access.redhat.com/solutions/22231</a> (account required)<br>
[2] <a href="https://bugzilla.redhat.com/show_bug.cgi?id=489889" rel="noreferrer" target="_blank">https://bugzilla.redhat.com/show_bug.cgi?id=489889</a> (old, but I <br>
cannot find anything newer)<br>
<br>
-- <br>
Danti Gionatan<br>
Supporto Tecnico<br>
Assyoma S.r.l. - <a href="http://www.assyoma.it" rel="noreferrer" target="_blank">www.assyoma.it</a> [1]<br>
email: <a href="mailto:g.danti@assyoma.it" target="_blank">g.danti@assyoma.it</a> - <a href="mailto:info@assyoma.it" target="_blank">info@assyoma.it</a><br>
GPG public key ID: FF5F32A8<br>
</blockquote></div><br clear="all"><div><br></div>-- <br><div dir="ltr" class="gmail_signature"><div dir="ltr">Respectfully<div>Mahdi</div></div></div>
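<div dir="ltr"><br></div><div dir="ltr">P.S. For anyone weighing the JBOD route: the brick replacement mentioned above is, in outline, a replace-brick followed by a self-heal. A minimal sketch, where the volume name, host, and brick paths are placeholders (not from any setup discussed in this thread):</div><div dir="ltr"><br></div>

```shell
# Hypothetical names: volume "myvol", failed brick on server2 at /bricks/b1.
# Point the volume at a fresh, empty brick (same host or another one):
gluster volume replace-brick myvol \
    server2:/bricks/b1/brick server2:/bricks/b1-new/brick commit force

# Kick off a full self-heal so the surviving replicas repopulate it:
gluster volume heal myvol full

# Watch progress until the pending-heal entry count drops to zero:
gluster volume heal myvol info
```

<div dir="ltr">While the heal runs, client I/O keeps being served by the remaining replicas, which is the behaviour described at the top of this message.</div>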