<div dir="ltr"><div>After forcing the add-brick:</div><div><br></div><div>gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1 arbiter:/arbiter2 arbiter:/arbiter3 force<br>volume add-brick: success<br>pve01:~# gluster volume info<br> <br>Volume Name: VMS<br>Type: Distributed-Replicate<br>Volume ID: e1a4f787-3f62-441e-a7ce-c0ae6b111ebf<br>Status: Started<br>Snapshot Count: 0<br>Number of Bricks: 3 x (2 + 1) = 9<br>Transport-type: tcp<br>Bricks:<br>Brick1: gluster1:/disco2TB-0/vms<br>Brick2: gluster2:/disco2TB-0/vms<br>Brick3: arbiter:/arbiter1 (arbiter)<br>Brick4: gluster1:/disco1TB-0/vms<br>Brick5: gluster2:/disco1TB-0/vms<br>Brick6: arbiter:/arbiter2 (arbiter)<br>Brick7: gluster1:/disco1TB-1/vms<br>Brick8: gluster2:/disco1TB-1/vms<br>Brick9: arbiter:/arbiter3 (arbiter)<br>Options Reconfigured:<br>cluster.self-heal-daemon: off<br>cluster.entry-self-heal: off<br>cluster.metadata-self-heal: off<br>cluster.data-self-heal: off<br>cluster.granular-entry-heal: on<br>storage.fips-mode-rchecksum: on<br>transport.address-family: inet<br>performance.client-io-threads: off<br>pve01:~# </div><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div>---</div><div><br></div><div><br></div><div><div><div>Gilberto Nunes Ferreira</div></div><div><br></div><div><p style="font-size:12.8px;margin:0px"></p><p style="font-size:12.8px;margin:0px"><br></p><p style="font-size:12.8px;margin:0px"><br></p></div></div><div><br></div></div></div></div></div></div></div></div><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Nov 8, 2024 at 06:38, Strahil Nikolov <<a href="mailto:hunter86_bg@yahoo.com">hunter86_bg@yahoo.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">What's the volume structure right now?<br><br><div id="m_-5388910257960545199ymail_android_signature">Best Regards,<br>Strahil Nikolov</div> <br> <blockquote style="margin:0px 0px 20px"> <div style="font-family:Roboto,sans-serif;color:rgb(109,0,246)"> <div>On Wed, Nov 6, 2024 at 18:24, Gilberto Ferreira</div><div><<a href="mailto:gilberto.nunes32@gmail.com" target="_blank">gilberto.nunes32@gmail.com</a>> wrote:</div> </div> <div style="padding:10px 0px 0px 20px;margin:10px 0px 0px;border-left:1px solid rgb(109,0,246)"> <div id="m_-5388910257960545199yiv0687256608"><div><div dir="ltr"><div>So I went ahead and used force (the force is with you!)</div><div><br clear="none"></div><div><font face="arial, sans-serif"><span style="color:rgb(0,0,0)">gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1 arbiter:/arbiter2 arbiter:/arbiter3
</span><br clear="none">volume add-brick: failed: Multiple bricks of a replicate volume are present on the same server. This setup is not optimal. Bricks should be on different nodes to have best fault tolerant configuration. Use 'force' at the end of the command if you want to override this behavior. <br clear="none">pve01:~# gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1 arbiter:/arbiter2 arbiter:/arbiter3 force</font></div><div><font face="arial, sans-serif">volume add-brick: success</font></div><div><font face="arial, sans-serif"><br clear="none"></font></div><div><font face="arial, sans-serif">But I don't know if this is the right thing to do.</font></div><div><span style="font-family:monospace"><br clear="none">
<br clear="none"></span></div><div><span style="font-family:monospace"><br clear="none"></span></div><div><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div>---</div><div><br clear="none"></div><div><br clear="none"></div><div><div><div>Gilberto Nunes Ferreira</div></div><div><span style="font-size:12.8px">(47) 99676-7530 - Whatsapp / Telegram</span><br clear="none"></div><div><p style="font-size:12.8px;margin:0px"></p><p style="font-size:12.8px;margin:0px"><br clear="none"></p><p style="font-size:12.8px;margin:0px"><br clear="none"></p></div></div><div><br clear="none"></div></div></div></div></div></div></div></div><br clear="none"></div><br clear="none"><div id="m_-5388910257960545199yiv0687256608yqt26679"><div><div dir="ltr">On Wed, Nov 6, 2024 at 13:10, Gilberto Ferreira <<a shape="rect" href="mailto:gilberto.nunes32@gmail.com" rel="noreferrer noopener" target="_blank">gilberto.nunes32@gmail.com</a>> wrote:<br clear="none"></div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><font face="arial, sans-serif">But if I change from replica 2 arbiter 1 to replica 3 arbiter 1:</font></div><div><font face="arial, sans-serif"><br clear="none"></font></div><div><span style="color:rgb(0,0,0)"><font face="arial, sans-serif">gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1 arbiter:/arbiter2 arbiter:/arbiter3</font></span></div><div><font face="arial, sans-serif"><span style="color:rgb(0,0,0)"></span><font color="#000000">I got this error:</font></font></div><div><font face="arial, sans-serif"><font color="#000000"><br clear="none"></font>volume add-brick: failed: Multiple bricks of a replicate volume are present on the same server. This setup is not optimal. Bricks should be on different nodes to have best fault tolerant configuration. 
Use 'force' at the end of the command if you want to override this behavior.</font></div><div><font face="arial, sans-serif"><br clear="none"></font></div><div><font face="arial, sans-serif">Should I just add 'force' and live with this?</font></div><div><span style="font-family:monospace"><br clear="none">
<br clear="none"></span></div><div><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div>---</div><div><br clear="none"></div><div><br clear="none"></div><div><div><div>Gilberto Nunes Ferreira</div></div><div><br clear="none"></div><div><p style="font-size:12.8px;margin:0px"></p><p style="font-size:12.8px;margin:0px"><br clear="none"></p><p style="font-size:12.8px;margin:0px"><br clear="none"></p></div></div><div><br clear="none"></div></div></div></div></div></div></div></div><br clear="none"></div><br clear="none"><div><div dir="ltr">On Wed, Nov 6, 2024 at 12:53, Gilberto Ferreira <<a shape="rect" href="mailto:gilberto.nunes32@gmail.com" rel="noreferrer noopener" target="_blank">gilberto.nunes32@gmail.com</a>> wrote:<br clear="none"></div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>Ok.</div><div>I have a 3rd host with Debian 12 installed and Gluster v11. The name of the host is arbiter!</div><div><br clear="none"></div><div>I have already added this host to the pool:</div><div><font face="arial, sans-serif"><span style="color:rgb(0,0,0)">arbiter:~# gluster pool list
</span><br clear="none">UUID Hostname State
<br clear="none">0cbbfc27-3876-400a-ac1d-2d73e72a4bfd gluster1.home.local Connected <br clear="none">99ed1f1e-7169-4da8-b630-a712a5b71ccd gluster2 Connected <br clear="none">4718ead7-aebd-4b8b-a401-f9e8b0acfeb1 localhost Connected</font></div><div><font face="arial, sans-serif"><br clear="none"></font></div><div><font face="arial, sans-serif">But when I do this:</font></div><div><span style="color:rgb(0,0,0)"><font face="arial, sans-serif">pve01:~# gluster volume add-brick VMS replica 2 arbiter 1 arbiter:/arbiter1 arbiter:/arbiter2 arbiter:/arbiter3</font></span></div><div><font color="#000000" face="arial, sans-serif">I got this error:<br clear="none"><br clear="none"></font><font face="arial, sans-serif"><span style="color:rgb(0,0,0)">For arbiter configuration, replica count must be 3 and arbiter count must be 1. The 3rd brick of the replica will be the arbiter
</span><br clear="none">
<br clear="none">Usage:
<br clear="none">volume add-brick <VOLNAME> [<replica> <COUNT> [arbiter <COUNT>]] <NEW-BRICK> ... [force]</font></div><div><font face="arial, sans-serif"><br clear="none"></font></div><div><font face="arial, sans-serif">gluster vol info</font></div><div><font face="arial, sans-serif"><span style="color:rgb(0,0,0)">pve01:~# gluster vol info
</span><br clear="none"> <br clear="none">Volume Name: VMS
<br clear="none">Type: Distributed-Replicate
<br clear="none">Volume ID: e1a4f787-3f62-441e-a7ce-c0ae6b111ebf
<br clear="none">Status: Started
<br clear="none">Snapshot Count: 0
<br clear="none">Number of Bricks: 3 x 2 = 6
<br clear="none">Transport-type: tcp
<br clear="none">Bricks:
<br clear="none">Brick1: gluster1:/disco2TB-0/vms
<br clear="none">Brick2: gluster2:/disco2TB-0/vms
<br clear="none">Brick3: gluster1:/disco1TB-0/vms
<br clear="none">Brick4: gluster2:/disco1TB-0/vms
<br clear="none">Brick5: gluster1:/disco1TB-1/vms
<br clear="none">Brick6: gluster2:/disco1TB-1/vms
<br clear="none">Options Reconfigured:
<br clear="none">performance.client-io-threads: off
<br clear="none">transport.address-family: inet
<br clear="none">storage.fips-mode-rchecksum: on
<br clear="none">cluster.granular-entry-heal: on
<br clear="none">cluster.data-self-heal: off
<br clear="none">cluster.metadata-self-heal: off
<br clear="none">cluster.entry-self-heal: off
<br clear="none">cluster.self-heal-daemon: off<br clear="none"></font>
<br clear="none">
What am I doing wrong?</div><div><font color="#000000" face="arial, sans-serif"><br clear="none"></font>
<br clear="none">
<br clear="none"></div><div><span style="font-family:monospace"><br clear="none"></span></div><div><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div>---</div><div><br clear="none"></div><div><img src="https://ci3.googleusercontent.com/mail-sig/AIorK4yHG-wUIelCTtozPBQoS83ZcRg8ukTeTVlsRkm1MmU-3Xy2S-2myWu_idYMDhBGeBoDo33pV0UOwMIl" id="m_-5388910257960545199ymail_ctr_id_-59459-11"><br clear="none"></div><div><div><div>Gilberto Nunes Ferreira</div></div><div><span style="font-size:12.8px">(47) 99676-7530 - Whatsapp / Telegram</span><br clear="none"></div><div><p style="font-size:12.8px;margin:0px"></p><p style="font-size:12.8px;margin:0px"><br clear="none"></p><p style="font-size:12.8px;margin:0px"><br clear="none"></p></div></div><div><br clear="none"></div></div></div></div></div></div></div></div><br clear="none"></div><br clear="none"><div><div dir="ltr">Em qua., 6 de nov. de 2024 às 11:32, Strahil Nikolov <<a shape="rect" href="mailto:hunter86_bg@yahoo.com" rel="noreferrer noopener" target="_blank">hunter86_bg@yahoo.com</a>> escreveu:<br clear="none"></div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Right now you have 3 "sets" of replica 2 on 2 hosts.<div>In your case you don't need so much space for arbiters (10-15GB with 95 maxpct is enough for each "set") and you need a 3rd system or when the node that holds the data brick + arbiter brick fails (2 node scenario) - that "set" will be unavailable.</div><div><br clear="none"></div><div>If you do have a 3rd host, I think the command would be:</div><div>gluster volume add-brick VOLUME replica 2 arbiter 1 server3:/first/set/arbiter server3:/second/set/arbiter server3:/last/set/arbiter</div><div><br clear="none"></div><div><br clear="none"></div><div>Best Regards,</div><div>Strahil Nikolov<br clear="none"><br clear="none"><div 
id="m_-5388910257960545199yiv0687256608m_7722182346011597138m_-5253944993768681415m_-5393993328698514470ymail_android_signature">Best Regards,<br clear="none">Strahil Nikolov</div> <br clear="none"> <blockquote style="margin:0px 0px 20px"> <div style="font-family:Roboto,sans-serif;color:rgb(109,0,246)"> <div>On Tue, Nov 5, 2024 at 21:17, Gilberto Ferreira</div><div><<a shape="rect" href="mailto:gilberto.nunes32@gmail.com" rel="noreferrer noopener" target="_blank">gilberto.nunes32@gmail.com</a>> wrote:</div> </div> <div style="padding:10px 0px 0px 20px;margin:10px 0px 0px;border-left:1px solid rgb(109,0,246)"> ________<br clear="none"><br clear="none"><br clear="none"><br clear="none">Community Meeting Calendar:<br clear="none"><br clear="none">Schedule -<br clear="none">Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC<br clear="none">Bridge: <a shape="rect" href="https://meet.google.com/cpu-eiue-hvk" rel="noreferrer noopener" target="_blank">https://meet.google.com/cpu-eiue-hvk</a><br clear="none">Gluster-users mailing list<br clear="none"><a shape="rect" href="mailto:Gluster-users@gluster.org" rel="noreferrer noopener" target="_blank">Gluster-users@gluster.org</a><br clear="none"><a shape="rect" href="https://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer noopener" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br clear="none"> </div> </blockquote></div></blockquote></div>
</blockquote></div>
</blockquote></div></div>
</div></div> </div> </blockquote></blockquote></div>
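The final layout in the `gluster volume info` output at the top of the thread can be checked mechanically: in a 3 x (2 + 1) distributed-replicate volume, glusterd groups bricks in threes in listing order, and the third brick of each group is that replica set's arbiter. A minimal POSIX-sh sketch (brick names copied from the volume info above; the grouping rule is the only assumption) that picks out each set's arbiter:

```shell
# Brick list in the order shown by 'gluster volume info' (Brick1..Brick9).
bricks="gluster1:/disco2TB-0/vms gluster2:/disco2TB-0/vms arbiter:/arbiter1 \
gluster1:/disco1TB-0/vms gluster2:/disco1TB-0/vms arbiter:/arbiter2 \
gluster1:/disco1TB-1/vms gluster2:/disco1TB-1/vms arbiter:/arbiter3"

# With 'replica 3 arbiter 1', consecutive bricks form replica sets of three;
# the 3rd brick of each set is the arbiter (matching the "(arbiter)" tags above).
i=0
arbiters=""
for b in $bricks; do
  i=$((i + 1))
  if [ $((i % 3)) -eq 0 ]; then
    arbiters="$arbiters$b "
    echo "replica set $((i / 3)): arbiter brick = $b"
  fi
done
# → replica set 1: arbiter brick = arbiter:/arbiter1
# → replica set 2: arbiter brick = arbiter:/arbiter2
# → replica set 3: arbiter brick = arbiter:/arbiter3
```

This also shows why all three arbiter bricks landing on the one host named "arbiter" trips the "multiple bricks ... on the same server" warning: the check looks at hostnames within each replica set, and `force` overrides it.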