[Gluster-users] Add an arbiter when you have multiple bricks on the same server.

Gilberto Ferreira gilberto.nunes32 at gmail.com
Wed Nov 6 16:10:10 UTC 2024


But if I change replica 2 arbiter 1 to replica 3 arbiter 1

gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1
arbiter:/arbiter2 arbiter:/arbiter3
I got this error:

volume add-brick: failed: Multiple bricks of a replicate volume are present
on the same server. This setup is not optimal. Bricks should be on
different nodes to have best fault tolerant configuration. Use 'force' at
the end of the command if you want to override this behavior.
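
If I understand the hint, the override is just the same command with
'force' appended at the end, i.e.:

gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1
arbiter:/arbiter2 arbiter:/arbiter3 force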

Should I just add 'force' and live with this setup?


---


Gilberto Nunes Ferreira






On Wed, Nov 6, 2024 at 12:53, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote:

> Ok.
> I have a 3rd host with Debian 12 installed and Gluster v11. The name of
> the host is arbiter!
>
> I already added this host to the pool:
> arbiter:~# gluster pool list
> UUID                                    Hostname                State
> 0cbbfc27-3876-400a-ac1d-2d73e72a4bfd    gluster1.home.local     Connected
> 99ed1f1e-7169-4da8-b630-a712a5b71ccd    gluster2                Connected
> 4718ead7-aebd-4b8b-a401-f9e8b0acfeb1    localhost               Connected
>
> But when I do this:
> pve01:~# gluster volume add-brick VMS replica 2 arbiter 1
> arbiter:/arbiter1 arbiter:/arbiter2 arbiter:/arbiter3
> I got this error:
>
> For arbiter configuration, replica count must be 3 and arbiter count must
> be 1. The 3rd brick of the replica will be the arbiter
>
> Usage:
> volume add-brick <VOLNAME> [<replica> <COUNT> [arbiter <COUNT>]]
> <NEW-BRICK> ... [force]
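>
> Does that mean the command needs "replica 3 arbiter 1" instead, something
> like the following?
> gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1
> arbiter:/arbiter2 arbiter:/arbiter3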
>
> gluster vol info
> pve01:~# gluster vol info
>
> Volume Name: VMS
> Type: Distributed-Replicate
> Volume ID: e1a4f787-3f62-441e-a7ce-c0ae6b111ebf
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 3 x 2 = 6
> Transport-type: tcp
> Bricks:
> Brick1: gluster1:/disco2TB-0/vms
> Brick2: gluster2:/disco2TB-0/vms
> Brick3: gluster1:/disco1TB-0/vms
> Brick4: gluster2:/disco1TB-0/vms
> Brick5: gluster1:/disco1TB-1/vms
> Brick6: gluster2:/disco1TB-1/vms
> Options Reconfigured:
> performance.client-io-threads: off
> transport.address-family: inet
> storage.fips-mode-rchecksum: on
> cluster.granular-entry-heal: on
> cluster.data-self-heal: off
> cluster.metadata-self-heal: off
> cluster.entry-self-heal: off
> cluster.self-heal-daemon: off
>
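> (If I read the layout right, the replica sets pair up in listing order:
> Brick1+Brick2, Brick3+Brick4 and Brick5+Brick6, so each pair would need
> its own arbiter brick, three in total.)
>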
> What am I doing wrong?
>
>
>
>
> ---
>
>
> Gilberto Nunes Ferreira
> (47) 99676-7530 - Whatsapp / Telegram
>
>
>
>
>
>
> On Wed, Nov 6, 2024 at 11:32, Strahil Nikolov <hunter86_bg at yahoo.com> wrote:
>
>> Right now you have 3 "sets" of replica 2 across 2 hosts.
>> Arbiters don't need much space (10-15 GB with maxpct=95 is enough for
>> each "set"), but you do need a 3rd system: otherwise, when the node that
>> holds both a data brick and its arbiter brick fails (the 2-node
>> scenario), that "set" becomes unavailable.
>>
>> If you do have a 3rd host, I think the command would be:
>> gluster volume add-brick VOLUME replica 2 arbiter 1
>> server3:/first/set/arbiter server3:/second/set/arbiter
>> server3:/last/set/arbiter
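>>
>> (Depending on the Gluster version, the CLI may insist on "replica 3
>> arbiter 1" here, in which case it would presumably be:
>> gluster volume add-brick VOLUME replica 3 arbiter 1
>> server3:/first/set/arbiter server3:/second/set/arbiter
>> server3:/last/set/arbiter)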
>>
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On Tue, Nov 5, 2024 at 21:17, Gilberto Ferreira
>> <gilberto.nunes32 at gmail.com> wrote:
>>