<div dir="ltr">Clearly I am doing something wrong<div><br></div><div><span style="font-family:monospace"><span style="color:rgb(0,0,0)">pve01:~# gluster vol info
</span><br> <br>Volume Name: VMS
<br>Type: Distributed-Replicate
<br>Volume ID: e1a4f787-3f62-441e-a7ce-c0ae6b111ebf
<br>Status: Started
<br>Snapshot Count: 0
<br>Number of Bricks: 3 x 2 = 6
<br>Transport-type: tcp
<br>Bricks:
<br>Brick1: gluster1:/disco2TB-0/vms
<br>Brick2: gluster2:/disco2TB-0/vms
<br>Brick3: gluster1:/disco1TB-0/vms
<br>Brick4: gluster2:/disco1TB-0/vms
<br>Brick5: gluster1:/disco1TB-1/vms
<br>Brick6: gluster2:/disco1TB-1/vms
<br>Options Reconfigured:
<br>cluster.self-heal-daemon: off
<br>cluster.entry-self-heal: off
<br>cluster.metadata-self-heal: off
<br>cluster.data-self-heal: off
<br>cluster.granular-entry-heal: on
<br>storage.fips-mode-rchecksum: on
<br>transport.address-family: inet
<br>performance.client-io-threads: off
<br>pve01:~# gluster vol add-brick VMS replica 3 arbiter 1 gluster1:/disco2TB-0/vms gluster2:/disco2TB-0/vms gluster1:/disco1TB-0/vms gluster2:/disco1TB-0/vms gluster1:/disco1TB-1/vms gluster2:/disco1TB-1/vms arbiter:/arbiter1
<br>volume add-brick: failed: Operation failed
<br>pve01:~# gluster vol add-brick VMS replica 3 arbiter 1 gluster1:/disco2TB-0/vms gluster2:/disco2TB-0/vms gluster1:/disco1TB-0/vms gluster2:/disco1TB-0/vms gluster1:/disco1TB-1/vms gluster2:/disco1TB-1/vms arbiter:/arbiter1 arbiter:/arbiter2 arbiter:/arbiter3 arbiter:/arbiter4
<br>volume add-brick: failed: Operation failed<br>
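<br><br>If I understand the add-brick syntax correctly, only the new arbiter bricks should be listed, one per existing replica pair, without repeating the bricks already in the volume - and since all three arbiter bricks would sit on the same host, gluster will presumably insist on 'force'. Something like this (an untested guess on my part):<br><br>pve01:~# gluster vol add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1/arbiter1 arbiter:/arbiter2/arbiter2 arbiter:/arbiter3/arbiter3 force<br>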
<br></span><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div>---</div><div><br></div><div><br></div><div><div><div>Gilberto Nunes Ferreira</div></div><div><br></div><div><p style="font-size:12.8px;margin:0px"></p><p style="font-size:12.8px;margin:0px"><br></p><p style="font-size:12.8px;margin:0px"><br></p></div></div><div><br></div></div></div></div></div></div></div></div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Nov 5, 2024 at 13:39, Andreas Schwibbe <<a href="mailto:a.schwibbe@gmx.net">a.schwibbe@gmx.net</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div><br></div><div>If you create a volume with replica 2 arbiter 1</div><div><br></div><div>you create 2 data bricks that are mirrored (makes 2 file copies)</div><div>+</div><div>you create 1 arbiter that holds metadata of all files on these bricks.</div><div><br></div><div>You "can" create all on the same server, but this makes no sense, because when the server goes down, no files on these disks are accessible anymore,<br>which is why best practice is to spread out over 3 servers, so when one server (or disk) goes down, you will still have 1 file copy and 1 arbiter with metadata online.<br>This is also very handy when the down server comes up again, because then you prevent split-brain: you have a matching file copy + metadata showing which version of each file is newest, so self-heal can jump in to get you back to 2 file copies.<br><br>When you want to add further bricks, you must add matching sets, i.e.<br>you will again add 2 data bricks + 1 arbiter, and these bricks and arbiter belong together and share the same files and metadata.<br><br>Hth.<br>A.</div><div><br></div><div><br></div><div>On Tuesday, 2024-11-05 at 13:28 -0300, Gilberto Ferreira wrote:</div><blockquote type="cite" style="margin:0px 0px 0px 0.8ex;border-left:2px solid rgb(114,159,207);padding-left:1ex"><div dir="ltr">Ok.<div>I got confused here!<br>For each brick I will need one arbiter brick, in a different partition/folder?</div><div>And what if at some point in the future I decide to add a new brick to the main servers?</div><div>Do I need to provide another partition/folder on the arbiter and then adjust the arbiter brick count?</div><div><br clear="all"><div><div dir="ltr" class="gmail_signature"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div>---</div><div><br></div><div><br></div><div><div><div>Gilberto Nunes Ferreira</div></div><div><br></div><div><p style="font-size:12.8px;margin:0px"></p><p style="font-size:12.8px;margin:0px"><br></p><p style="font-size:12.8px;margin:0px"><br></p></div></div><div><br></div></div></div></div></div></div></div></div><br></div></div><div><br></div><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Nov 5, 2024 at 13:22, Andreas Schwibbe <<a href="mailto:a.schwibbe@gmx.net" target="_blank">a.schwibbe@gmx.net</a>> wrote:<br></div><blockquote type="cite" style="margin:0px 0px 0px 0.8ex;border-left:2px solid rgb(114,159,207);padding-left:1ex"><div><div>Your add-brick command adds 2 data bricks and 1 arbiter (even though you name them all arbiter!)<br><br>The sequence is important:</div><div><br></div><div>gluster v add-brick VMS replica 2 arbiter 1 gluster1:/gv0 gluster2:/gv0 arbiter1:/arb1<br><br>adds two data bricks and a corresponding arbiter from 3 different servers and 3 different disks, <br>thus you can lose any one server OR any one disk and stay up and consistent.<br><br>When adding more bricks to the volume, you can follow the same pattern.<br><br>A.</div><div><br></div><div>On Tuesday, 2024-11-05 at 12:51 -0300, Gilberto Ferreira wrote:</div><blockquote type="cite" style="margin:0px 0px 0px 0.8ex;border-left:2px solid rgb(114,159,207);padding-left:1ex"><div dir="ltr">Hi there.<div><br></div><div>In previous emails, I discussed with you a 2-node gluster setup, where bricks of different sizes sit in different folders on the same servers, like</div><div><br></div><div><div><span style="font-family:monospace"><span style="color:rgb(0,0,0)">gluster vol create VMS replica 2 gluster1:/disco2TB-0/vms gluster2:/disco2TB-0/vms gluster1:/disco1TB-0/vms gluster2:/disco1TB-0/vms gluster1:/disco1TB-1/vms gluster2:/disco1TB-1/vms</span><br></span></div><div><br></div></div><div>So I went ahead and installed Debian 12 with the same gluster version as the other servers, which is now 11.1 or something like that.</div><div>On this new server, I have a small disk, about 480G in size.</div><div>And I created 3 partitions formatted with XFS using imaxpct=75, as suggested in previous emails.</div><div><br></div><div>And then, on the gluster nodes, I tried to add the bricks:</div><div><span style="font-family:monospace"><span style="color:rgb(0,0,0)">gluster vol add-brick VMS 
replica 3 arbiter 1 arbiter:/arbiter1/arbiter1 arbiter:/arbiter2/arbiter2 arbiter:/arbiter3/arbiter3</span><br></span></div><div><span style="font-family:monospace"><span style="color:rgb(0,0,0)"><br></span></span></div><div><span style="font-family:monospace"><span style="color:rgb(0,0,0)">But to my surprise (or not!) I got this message:</span></span></div><div><span style="font-family:monospace"><span style="color:rgb(0,0,0)">volume add-brick: failed: Multiple bricks of a replicate volume are present on the same server. This setup is not optimal. Bricks should be on different nodes to have best fault tolerant configuration. Use 'force' at the end of the command if you want to override this behavior.</span><br><br></span></div><div><span style="font-family:monospace">Why is that?</span></div><div><span style="font-family:monospace"><br></span></div><div> </div><div><br></div><div><br></div><div><br></div><div><br><div><div><div dir="ltr" class="gmail_signature"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div>---</div><div><br></div><div><br></div><div><div><div>Gilberto Nunes Ferreira</div></div><div><br></div><div><p style="font-size:12.8px;margin:0px"></p><p style="font-size:12.8px;margin:0px"><br></p><p style="font-size:12.8px;margin:0px"><br></p></div></div><div><br></div></div></div></div></div></div></div></div></div></div></div><div>________<br></div><div><br></div><div><br></div><div><br></div><div>Community Meeting Calendar:<br></div><div><br></div><div>Schedule -<br></div><div>Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC<br></div><div>Bridge: <a href="https://meet.google.com/cpu-eiue-hvk" target="_blank">https://meet.google.com/cpu-eiue-hvk</a><br></div><div>Gluster-users mailing list<br></div><div><a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br></div><div><a href="https://lists.gluster.org/mailman/listinfo/gluster-users" 
target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br></div></blockquote><div><br></div><div><span></span></div></div></blockquote></div></blockquote><div><br></div><div><span></span></div></div>
</blockquote></div>