Gilberto,

this totally depends on your setup.

With replica 2 you always have 2 copies of the same file.
So when you add bricks to your volume, you'll want to add Server1:/disco1TB-0 and Server2:/disco1TB-0 as a pair, meaning each file goes to one disk on each of the two servers.
That way the system can lose one disk out of each pair, OR one whole server, and still stay up.

However, I recommend not using replica 2, because you'll run into split-brain problems when one server is down: when it comes back up, you may have two different versions of the same file and you need a strategy to figure out which of the two copies is the valid one.
You can set the volume to go read-only while one server is down, which avoids split-brains, but depending on your use case that may mean downtime.

That's why you should use at least replica 2 + 1 arbiter.
The arbiter holds a metadata copy of each file (so the hardware requirements for that server are low and it doesn't need big disks), which makes it easy to find the valid file copy and heal the invalid one. (I once ran a NUC as arbiter, worked totally fine.) [When using an arbiter, be sure to create the XFS filesystem on the arbiter with imaxpct=75, since its bricks hold only metadata, not file data.]

If you've got enough resources for 3 servers, replica 3 is best.

When you run
gluster v status
with replica 2, the first two brick rows form a pair; with replica 3, the first three rows belong together and hold copies of the same file.
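If it helps, here is a rough sketch of what that could look like on the command line. Host pve01 is taken from your df output; pve02 and arb01, the volume name "vol1", the arbiter device and the brick sub-directories are only placeholders for this example, so adjust them to your environment:

# on the arbiter only: its bricks hold metadata, so reserve more inode space
mkfs.xfs -i maxpct=75 /dev/sdb
mount /dev/sdb /arbiter1TB-0

# replica 3 arbiter 1: full copies on pve01 and pve02, metadata-only copy on arb01
gluster volume create vol1 replica 3 arbiter 1 \
    pve01:/disco1TB-0/brick pve02:/disco1TB-0/brick arb01:/arbiter1TB-0/brick
gluster volume start vol1

# grow the volume one replica set at a time; the bricks listed together form a set
gluster volume add-brick vol1 \
    pve01:/disco1TB-1/brick pve02:/disco1TB-1/brick arb01:/arbiter1TB-1/brick

gluster v status vol1

The only thing that really matters is the ordering: the bricks listed together on one create/add-brick line are the ones that end up holding copies of the same files, and those are exactly the consecutive rows you then see in gluster v status.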
href="https://lists.gluster.org/mailman/listinfo/gluster-users">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br></div></blockquote><div><br></div><div><span></span></div></body></html>