[Gluster-users] Add an arbiter when you have multiple bricks on the same server.

Gilberto Ferreira gilberto.nunes32 at gmail.com
Tue Nov 5 17:33:09 UTC 2024


Yes, but I want to add an arbiter to the existing volume.
Is it the same logic?
---
Gilberto Nunes Ferreira
+55 (47) 99676-7530
Proxmox VE
VinChin Backup & Restore

On Tue, Nov 5, 2024, 14:09, Aravinda <aravinda at kadalu.tech> wrote:

> Hello Gilberto,
>
> You can create an Arbiter volume using three bricks. Two of them will be
> data bricks and one will be the Arbiter brick.
>
> gluster volume create VMS replica 3 arbiter 1 gluster1:/disco2TB-0/vms gluster2:/disco2TB-0/vms arbiter:/arbiter1
>
> To make this volume a distributed Arbiter volume, add more bricks (in
> multiples of 3: two data bricks and one arbiter brick), similar to the above.
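> If instead you want to convert your existing replica 2 volume rather than
> create a new one, pass only the new arbiter bricks to add-brick, one per
> existing replica pair. A rough sketch for your three pairs (the brick paths
> on the arbiter node are placeholders):
>
> gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1/vms arbiter:/arbiter2/vms arbiter:/arbiter3/vms
>
> Since all three arbiter bricks land on the same node, Gluster will warn
> about it; adding 'force' at the end of the command overrides that warning.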
>
> --
> Aravinda
>
>
> ---- On Tue, 05 Nov 2024 22:24:38 +0530, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote ----
>
> Clearly I am doing something wrong
>
> pve01:~# gluster vol info
>
> Volume Name: VMS
> Type: Distributed-Replicate
> Volume ID: e1a4f787-3f62-441e-a7ce-c0ae6b111ebf
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 3 x 2 = 6
> Transport-type: tcp
> Bricks:
> Brick1: gluster1:/disco2TB-0/vms
> Brick2: gluster2:/disco2TB-0/vms
> Brick3: gluster1:/disco1TB-0/vms
> Brick4: gluster2:/disco1TB-0/vms
> Brick5: gluster1:/disco1TB-1/vms
> Brick6: gluster2:/disco1TB-1/vms
> Options Reconfigured:
> cluster.self-heal-daemon: off
> cluster.entry-self-heal: off
> cluster.metadata-self-heal: off
> cluster.data-self-heal: off
> cluster.granular-entry-heal: on
> storage.fips-mode-rchecksum: on
> transport.address-family: inet
> performance.client-io-threads: off
> pve01:~# gluster vol add-brick VMS replica 3 arbiter 1 gluster1:/disco2TB-0/vms gluster2:/disco2TB-0/vms gluster1:/disco1TB-0/vms gluster2:/disco1TB-0/vms gluster1:/disco1TB-1/vms gluster2:/disco1TB-1/vms arbiter:/arbiter1
> volume add-brick: failed: Operation failed
> pve01:~# gluster vol add-brick VMS replica 3 arbiter 1 gluster1:/disco2TB-0/vms gluster2:/disco2TB-0/vms gluster1:/disco1TB-0/vms gluster2:/disco1TB-0/vms gluster1:/disco1TB-1/vms gluster2:/disco1TB-1/vms arbiter:/arbiter1 arbiter:/arbiter2 arbiter:/arbiter3 arbiter:/arbiter4
> volume add-brick: failed: Operation failed
>
> ---
>
>
> Gilberto Nunes Ferreira
>
>
>
>
>
>
>
> On Tue, Nov 5, 2024 at 13:39, Andreas Schwibbe <a.schwibbe at gmx.net> wrote:
>
>
> If you create a volume with replica 2 arbiter 1
>
> you create 2 data bricks that are mirrored (makes 2 file copies)
> +
> you create 1 arbiter that holds metadata of all files on these bricks.
>
> You "can" create all of them on the same server, but this makes no sense:
> when that server goes down, none of the files on these disks are accessible
> anymore. That is why best practice is to spread out over 3 servers, so that
> when one server (or disk) goes down, you still have 1 file copy and 1
> arbiter with metadata online.
> This is also very handy when the downed server comes up again, because it
> prevents split-brain: you have a matching file copy + metadata showing
> which version of each file is newest, so self-heal can jump in to get you
> back to 2 file copies.
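>
> E.g. a spread-out setup could be created like this (the server names and
> paths here are only placeholders):
>
> gluster volume create VMS replica 3 arbiter 1 server1:/data/vms server2:/data/vms server3:/arbiter/vms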
>
> When you want to add further bricks, you must add matching sets, i.e. you
> again add 2 data bricks + 1 arbiter, and these bricks and the arbiter belong
> together and share the same files and metadata (see the example below).
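>
> For a volume like yours, one such set might look like this (the
> disco2TB-1 and arbiter4 paths are placeholders):
>
> gluster volume add-brick VMS replica 3 arbiter 1 gluster1:/disco2TB-1/vms gluster2:/disco2TB-1/vms arbiter:/arbiter4/vms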
>
> Hth.
> A.
>
>
> On Tuesday, 05.11.2024 at 13:28 -0300, Gilberto Ferreira wrote:
>
> Ok.
> I got confused here!
> For each brick, will I need one arbiter brick, in a different
> partition/folder?
> And what if at some point in the future I decide to add a new brick to the
> main servers?
> Do I need to provide another partition/folder on the arbiter and then
> adjust the arbiter brick count?
>
> ---
>
>
> Gilberto Nunes Ferreira
>
>
>
>
>
>
>
> On Tue, Nov 5, 2024 at 13:22, Andreas Schwibbe <a.schwibbe at gmx.net> wrote:
>
> Your add-brick command adds 2 data bricks and 1 arbiter (even though you
> name them all arbiter!)
>
> The sequence is important:
>
> gluster v add-brick VMS replica 2 arbiter 1 gluster1:/gv0 gluster2:/gv0 arbiter1:/arb1
>
> adds two data bricks and a corresponding arbiter from 3 different servers
> and 3 different disks, so you can lose any one server OR any one disk and
> stay up and consistent.
>
> When adding more bricks to the volume, you can follow the same pattern, e.g.:
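>
> a second set, continuing the naming above (the gv1 and arb2 paths are
> placeholders), could presumably look like:
>
> gluster v add-brick VMS replica 2 arbiter 1 gluster1:/gv1 gluster2:/gv1 arbiter1:/arb2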
>
> A.
>
> On Tuesday, 05.11.2024 at 12:51 -0300, Gilberto Ferreira wrote:
>
> Hi there.
>
> In previous emails, I discussed with you my 2-node Gluster setup, where the
> bricks live in different folders on disks of different sizes on the same
> servers, like:
>
> gluster vol create VMS replica 2 gluster1:/disco2TB-0/vms gluster2:/disco2TB-0/vms gluster1:/disco1TB-0/vms gluster2:/disco1TB-0/vms gluster1:/disco1TB-1/vms gluster2:/disco1TB-1/vms
>
> So I went ahead and installed Debian 12 with the same Gluster version as
> the other servers, which is now 11.1 or something like that.
> This new server has a small disk, about 480 GB in size.
> On it, I created 3 partitions formatted with XFS using imaxpct=75, as
> suggested in previous emails.
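> (Roughly like this; the device names are just placeholders:)
>
> mkfs.xfs -i maxpct=75 /dev/sdb1
> mkfs.xfs -i maxpct=75 /dev/sdb2
> mkfs.xfs -i maxpct=75 /dev/sdb3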
>
> And then, on the gluster nodes, I tried to add the bricks:
> gluster vol add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1/arbiter1 arbiter:/arbiter2/arbiter2 arbiter:/arbiter3/arbiter3
>
> But to my surprise (or not!) I got this message:
> volume add-brick: failed: Multiple bricks of a replicate volume are present
> on the same server. This setup is not optimal. Bricks should be on
> different nodes to have best fault tolerant configuration. Use 'force' at
> the end of the command if you want to override this behavior.
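>
> (If I read the message right, the override it mentions would simply be the
> same command with force appended:)
>
> gluster vol add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1/arbiter1 arbiter:/arbiter2/arbiter2 arbiter:/arbiter3/arbiter3 force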
>
> Why is that?
>
>
>
>
>
>
> ---
>
>
> Gilberto Nunes Ferreira
>
>
>
>
>
> ________
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
> Gluster-users mailing list
> Gluster-users at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>
>

