<div dir="ltr">Still getting an error:<div>pve01:~# gluster vol info<br> <br>Volume Name: VMS<br>Type: Distributed-Replicate<br>Volume ID: e1a4f787-3f62-441e-a7ce-c0ae6b111ebf<br>Status: Started<br>Snapshot Count: 0<br>Number of Bricks: 3 x 2 = 6<br>Transport-type: tcp<br>Bricks:<br>Brick1: gluster1:/disco2TB-0/vms<br>Brick2: gluster2:/disco2TB-0/vms<br>Brick3: gluster1:/disco1TB-0/vms<br>Brick4: gluster2:/disco1TB-0/vms<br>Brick5: gluster1:/disco1TB-1/vms<br>Brick6: gluster2:/disco1TB-1/vms<br>Options Reconfigured:<br>cluster.self-heal-daemon: off<br>cluster.entry-self-heal: off<br>cluster.metadata-self-heal: off<br>cluster.data-self-heal: off<br>cluster.granular-entry-heal: on<br>storage.fips-mode-rchecksum: on<br>transport.address-family: inet<br>performance.client-io-threads: off<br>pve01:~# gluster volume add-brick VMS replica 3 arbiter 1 gluster1:/disco2TB-0/vms gluster2:/disco2TB-0/vms arbiter:/arbiter1 force<br>volume add-brick: failed: Brick: gluster1:/disco2TB-0/vms not available. Brick may be containing or be contained by an existing brick.<br>pve01:~# <br clear="all"><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div>---</div><div><br></div><div><br></div><div><div><div>Gilberto Nunes Ferreira</div></div><div><br></div><div><br></div><div><p style="font-size:12.8px;margin:0px"></p><p style="font-size:12.8px;margin:0px"><br></p><p style="font-size:12.8px;margin:0px"><br></p></div></div><div><br></div></div></div></div></div></div></div></div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Nov 5, 2024 at 14:33, Gilberto Ferreira <<a href="mailto:gilberto.nunes32@gmail.com">gilberto.nunes32@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><p dir="ltr">Yes, but I want to add.<br>
Is it the same logic?</p>
<div>---<br>Gilberto Nunes Ferreira <br>+55 (47) 99676-7530<br>Proxmox VE<br>VinChin Backup & Restore </div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Nov 5, 2024, 14:09, Aravinda <aravinda@kadalu.tech> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><u></u><div><div style="font-family:Verdana,Arial,Helvetica,sans-serif;font-size:10pt"><div>Hello Gilberto,<br></div><div><br></div><div>You can create an Arbiter volume using three bricks. Two of them will be data bricks and one will be the Arbiter brick.<br></div><div><br></div><div>gluster volume create VMS replica 3 arbiter 1 <span style="color:rgb(0,0,0);font-family:monospace;font-size:13px;font-style:normal;font-variant-ligatures:normal;font-variant-caps:normal;font-weight:400;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;word-spacing:0px;white-space:normal;text-decoration-style:initial;text-decoration-color:initial;float:none;display:inline">gluster1:/disco2TB-0/vms gluster2:/disco2TB-0/vms arbiter:/arbiter1</span><br></div><div><br></div><div><span 
style="color:rgb(0,0,0);font-family:monospace;font-size:13px;font-style:normal;font-variant-ligatures:normal;font-variant-caps:normal;font-weight:400;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;word-spacing:0px;white-space:normal;text-decoration-style:initial;text-decoration-color:initial;float:none;display:inline"><span style="color:rgb(0,0,0);font-family:monospace;font-size:13px;font-style:normal;font-variant-ligatures:normal;font-variant-caps:normal;font-weight:400;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;word-spacing:0px;white-space:normal;text-decoration-style:initial;text-decoration-color:initial;float:none;display:inline">To make this volume a distributed Arbiter volume, add more bricks (in multiples of 3: two data bricks and one arbiter brick), similar to the above.</span></span><br></div><div><br></div><div>--</div><div id="m_183633454189882072m_-5410720474341123955Zm-_Id_-Sgn"><div>Aravinda<br></div></div><div style="border-top:1px solid rgb(204,204,204);height:0px;margin-top:10px;margin-bottom:10px;line-height:0px"><br></div><div><div><br></div><div id="m_183633454189882072m_-5410720474341123955Zm-_Id_-Sgn1">---- On Tue, 05 Nov 2024 22:24:38 +0530 <b>Gilberto Ferreira <<a href="mailto:gilberto.nunes32@gmail.com" rel="noreferrer" target="_blank">gilberto.nunes32@gmail.com</a>></b> wrote ---<br></div><div><br></div><blockquote id="m_183633454189882072m_-5410720474341123955blockquote_zmail" style="margin:0px"><div><div dir="ltr"><div>Clearly I am doing something wrong.<br></div><div><br></div><div><div><span style="font-family:monospace"><span style="color:rgb(0,0,0)">pve01:~# gluster vol info </span><br> <br>Volume Name: VMS <br>Type: Distributed-Replicate <br>Volume ID: e1a4f787-3f62-441e-a7ce-c0ae6b111ebf <br>Status: Started <br>Snapshot Count: 0 <br>Number of Bricks: 3 x 2 = 6 <br>Transport-type: tcp <br>Bricks: <br>Brick1: gluster1:/disco2TB-0/vms <br>Brick2: gluster2:/disco2TB-0/vms <br>Brick3: 
gluster1:/disco1TB-0/vms <br>Brick4: gluster2:/disco1TB-0/vms <br>Brick5: gluster1:/disco1TB-1/vms <br>Brick6: gluster2:/disco1TB-1/vms <br>Options Reconfigured: <br>cluster.self-heal-daemon: off <br>cluster.entry-self-heal: off <br>cluster.metadata-self-heal: off <br>cluster.data-self-heal: off <br>cluster.granular-entry-heal: on <br>storage.fips-mode-rchecksum: on <br>transport.address-family: inet <br>performance.client-io-threads: off <br>pve01:~# gluster vol add-brick VMS replica 3 arbiter 1 gluster1:/disco2TB-0/vms gluster2:/disco2TB-0/vms gluster1:/disco1TB-0/vms gluster2:/disco1TB-0/vms gluster1:/disco1TB-1/vms gluster2:/disco1TB-1/vms arbiter:/arbiter1 <br>volume add-brick: failed: Operation failed <br>pve01:~# gluster vol add-brick VMS replica 3 arbiter 1 gluster1:/disco2TB-0/vms gluster2:/disco2TB-0/vms gluster1:/disco1TB-0/vms gluster2:/disco1TB-0/vms gluster1:/disco1TB-1/vms gluster2:/disco1TB-1/vms arbiter:/arbiter1 arbiter:/arbiter2 arbiter:/arbiter3 arbiter:/arbiter4 <br>volume add-brick: failed: Operation failed<br> <br></span></div><div><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div>---<br></div><div><br></div><div><br></div><div><div><div>Gilberto Nunes Ferreira<br></div></div><div><br></div><div><p style="margin:0px"><span style="font-size:12.8px;margin:0px"></span><br></p><p style="margin:0px"><span style="font-size:12.8px;margin:0px"><br></span></p><p style="margin:0px"><span style="font-size:12.8px;margin:0px"><br></span></p></div></div><div><br></div></div></div></div></div></div></div></div><div><br></div></div></div><div><br></div><div><div dir="ltr">On Tue, Nov 5, 2024 at 13:39, Andreas Schwibbe <<a href="mailto:a.schwibbe@gmx.net" rel="noreferrer" target="_blank">a.schwibbe@gmx.net</a>> wrote:<br></div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div><br></div><div>If you create a volume with replica 2 arbiter 1,<br></div><div><br></div><div>you create 2 data bricks that are mirrored (makes 2 file copies)<br></div><div>+<br></div><div>you create 1 arbiter that holds metadata of all files on these bricks.<br></div><div><br></div><div>You "can" create all on the same server, but this makes no sense, because when the server goes down, no files on these disks are accessible anymore,<br>hence why best practice is to spread out over 3 servers, so when one server (or disk) goes down, you will still have 1 file copy and 1 arbiter with metadata online.<br>This is also very handy when the down server comes up again, because then you prevent split-brain, as you have a matching file copy + metadata showing which version of each file is newest, so self-heal can jump in to get you back to 2 file copies.<br><br>When you want to add further bricks, you must add them in sets, i.e.<br>you will again add 2 data bricks and 1 arbiter, and these bricks and the arbiter belong together and share the same files and metadata.<br><br>Hth.<br>A.</div><div><br></div><div><br></div><div>On Tuesday, 05.11.2024 at 13:28 -0300, Gilberto Ferreira wrote:<br></div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:2px solid rgb(114,159,207);padding-left:1ex"><div dir="ltr"><div>Ok.<br></div><div>I got confused here!<br>For each brick, will I need one arbiter brick in a different partition/folder?</div><div>And what if at some point in the future I decide to add a new brick to the main servers?<br></div><div>Do I need to provide another partition/folder in the arbiter and then adjust the arbiter brick count?<br></div><div><div><br></div><div><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div 
dir="ltr"><div dir="ltr"><div>---<br></div><div><br></div><div><br></div><div><div><div>Gilberto Nunes Ferreira<br></div></div><div><br></div><div><p style="margin:0px"><span style="font-size:12.8px;margin:0px"></span><br></p><p style="margin:0px"><span style="font-size:12.8px;margin:0px"><br></span></p><p style="margin:0px"><span style="font-size:12.8px;margin:0px"><br></span></p></div></div><div><br></div></div></div></div></div></div></div></div><div><br></div></div></div><div><br></div><div><div dir="ltr">On Tue, Nov 5, 2024 at 13:22, Andreas Schwibbe <<a href="mailto:a.schwibbe@gmx.net" rel="noreferrer" target="_blank">a.schwibbe@gmx.net</a>> wrote:<br></div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:2px solid rgb(114,159,207);padding-left:1ex"><div><div>Your add-brick command adds 2 data bricks and 1 arbiter (even though you name them all arbiter!)<br><br>The sequence is important:</div><div><br></div><div>gluster v add-brick VMS replica 2 arbiter 1 gluster1:/gv0 gluster2:/gv0 arbiter1:/arb1<br><br>adds two data bricks and a corresponding arbiter from 3 different servers and 3 different disks, <br>thus you can lose any one server OR any one disk and stay up and consistent.<br><br>When adding more bricks to the volume, you can follow the same pattern.<br><br>A.</div><div><br></div><div>On Tuesday, 05.11.2024 at 12:51 -0300, Gilberto Ferreira wrote:<br></div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:2px solid rgb(114,159,207);padding-left:1ex"><div dir="ltr"><div>Hi there.<br></div><div><br></div><div>In previous emails, I discussed with you a 2-node gluster setup, where the bricks lie in folders on disks of different sizes on the same servers, like<br></div><div><br></div><div><div><span style="font-family:monospace"><span style="color:rgb(0,0,0)">gluster vol create VMS replica 2 gluster1:/disco2TB-0/vms gluster2:/disco2TB-0/vms gluster1:/disco1TB-0/vms gluster2:/disco1TB-0/vms gluster1:/disco1TB-1/vms 
gluster2:/disco1TB-1/vms</span><br></span></div><div><br></div></div><div>So I went ahead and installed Debian 12 with the same gluster version as the other servers, which is 11.1 or something like that.<br></div><div>In this new server, I have a small disk, about 480G in size.<br></div><div>And I created 3 partitions formatted with XFS using imaxpct=75, as suggested in previous emails.<br></div><div><br></div><div>And then, on the gluster nodes, I tried to add the bricks:<br></div><div><span style="font-family:monospace"><span style="color:rgb(0,0,0)">gluster vol add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1/arbiter1 arbiter:/arbiter2/arbiter2 arbiter:/arbiter3/arbiter3</span><br></span></div><div><span style="font-family:monospace"><span style="color:rgb(0,0,0)"><br></span></span></div><div><span style="font-family:monospace"><span style="color:rgb(0,0,0)">But to my surprise (or not!) I got this message:</span></span><br></div><div><span style="font-family:monospace"><span style="color:rgb(0,0,0)">volume add-brick: failed: Multiple bricks of a replicate volume are present on the same server. This setup is not optimal. Bricks should be on different nodes to have best fault tolerant configuration. Use 'force' at the end of the command if you want to override this behavior.</span> 
<br><br></span></div><div><span style="font-family:monospace">Why is that?</span><br></div><div><span style="font-family:monospace"><br></span></div><div> <br></div><div><br></div><div><br></div><div><br></div><div><div><br></div><div><div><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div>---<br></div><div><br></div><div><br></div><div><div><div>Gilberto Nunes Ferreira<br></div></div><div><br></div><div><p style="margin:0px"><span style="font-size:12.8px;margin:0px"></span><br></p><p style="margin:0px"><span style="font-size:12.8px;margin:0px"><br></span></p><p style="margin:0px"><span style="font-size:12.8px;margin:0px"><br></span></p></div></div><div><br></div></div></div></div></div></div></div></div></div></div></div><div>________<br></div><div><br></div><div><br></div><div><br></div><div>Community Meeting Calendar:<br></div><div><br></div><div>Schedule -<br></div><div>Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC<br></div><div>Bridge: <a href="https://meet.google.com/cpu-eiue-hvk" rel="noreferrer" target="_blank">https://meet.google.com/cpu-eiue-hvk</a><br></div><div>Gluster-users mailing list<br></div><div><a href="mailto:Gluster-users@gluster.org" rel="noreferrer" target="_blank">Gluster-users@gluster.org</a><br></div><div><a href="https://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br></div></blockquote><div><br></div><div><span></span><br></div></div></blockquote></div></blockquote><div><br></div><div><span></span><br></div></div></blockquote></div></div></blockquote></div><div><br></div></div><br></div></blockquote></div>
</blockquote></div>
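A note on the failing command at the top of the thread: that add-brick invocation re-lists the existing data bricks (gluster1:/disco2TB-0/vms and gluster2:/disco2TB-0/vms), which is what triggers "Brick may be containing or be contained by an existing brick." Following the pattern Aravinda and Andreas describe, converting an existing replica 2 volume to arbiter should name only the new arbiter bricks, one per replica pair. A minimal sketch, not verified against a live cluster, reusing the arbiter hostname and the /arbiterN/arbiterN brick paths from earlier in the thread:

```shell
# Sketch, under the assumptions above: convert the 3 x 2 distributed-replicate
# volume VMS to 3 x (2 + 1) by adding ONLY the new arbiter bricks -- one per
# existing replica pair. Do not re-list bricks that are already in the volume.
gluster volume add-brick VMS replica 3 arbiter 1 \
  arbiter:/arbiter1/arbiter1 \
  arbiter:/arbiter2/arbiter2 \
  arbiter:/arbiter3/arbiter3 \
  force   # all three arbiter bricks sit on one host, so gluster asks for force

# Afterwards, each replica set should show two data bricks plus one
# brick marked "(arbiter)".
gluster volume info VMS
```

The force flag is only needed here because the three arbiter bricks share a single server, the situation the "Multiple bricks of a replicate volume are present on the same server" warning refers to; with three separate arbiter hosts it would not be required.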