[Gluster-users] Add an arbiter when have multiple bricks at same server.

Andreas Schwibbe a.schwibbe at gmx.net
Tue Nov 5 16:39:35 UTC 2024


If you create a volume with replica 2 arbiter 1,

you create 2 data bricks that are mirrored (i.e. 2 copies of every file)
+
you create 1 arbiter brick that holds only the metadata of all files on
these data bricks.
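
As an illustration only (hostnames and brick paths below are placeholders,
and depending on the Gluster release the CLI may expect the count written
as replica 3 arbiter 1 instead), such a volume would be created roughly
like:

gluster volume create VOL replica 2 arbiter 1 \
    server1:/bricks/data server2:/bricks/data server3:/bricks/arb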

You "can" create all on the same server, but this makes no sense,
because when the server goes down, no files on these disks are
accessible anymore,
hence why bestpractice is to spread out over 3 servers, so when one
server (or disk) goes down, you will still have 1 file copy and 1
arbiter with metadata online.
Which is also very handy when the down server comes up again, because
then you prevent splitbrain as you have matching file copy + metadata
showing which version of each file is newest, thus self-heal can jump
in to get you back to 2 file copies.
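
If you want to keep an eye on that self-heal, something like

gluster volume heal VOLNAME info

(with your real volume name) lists, per brick, the entries that still
need healing.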

When you want to add further bricks, you must add them in matching sets,
i.e. you add another 2 data bricks and 1 arbiter, and these bricks and
their arbiter belong together and share the same files and metadata.
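
For example, to add a second set (the brick paths here are made up, the
syntax mirrors the add-brick command quoted further down):

gluster v add-brick VMS replica 2 arbiter 1 \
    gluster1:/gv1 gluster2:/gv1 arbiter1:/arb2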

Hth.
A.


On Tuesday, 2024-11-05 at 13:28 -0300, Gilberto Ferreira wrote:
> Ok.
> I got confused here!
> For each brick, will I need one arbiter brick, in a different
> partition/folder?
> And what if at some point in the future I decide to add a new brick
> on the main servers?
> Do I need to provide another partition/folder on the arbiter and then
> adjust the arbiter brick count?
> 
> ---
> 
> 
> Gilberto Nunes Ferreira
> 
> 
> 
> 
> 
> 
> On Tue, Nov 5, 2024 at 13:22, Andreas Schwibbe
> <a.schwibbe at gmx.net> wrote:
> > Your add-brick command adds 2 data bricks and 1 arbiter (even though
> > you name them all arbiter!)
> > 
> > The sequence is important:
> > 
> > gluster v add-brick VMS replica 2 arbiter 1 gluster1:/gv0
> > gluster2:/gv0 arbiter1:/arb1
> > 
> > adds two data bricks and a corresponding arbiter from 3 different
> > servers and 3 different disks,
> > thus you can lose any one server OR any one disk and stay up and
> > consistent.
> > 
> > When adding more bricks to the volume, you can follow the same pattern.
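> > 
> > To double-check the result afterwards, something like
> > 
> > gluster volume info VMS
> > 
> > should list all bricks of the volume, with the arbiter bricks marked
> > as such on recent releases, so you can verify each set ended up as
> > intended.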
> > 
> > A.
> > 
> > On Tuesday, 2024-11-05 at 12:51 -0300, Gilberto Ferreira wrote:
> > > Hi there.
> > > 
> > > In previous emails, I discussed with you guys a 2-node gluster
> > > setup, where the bricks have different sizes and live in different
> > > folders on the same server, like
> > > 
> > > gluster vol create VMS replica 2 gluster1:/disco2TB-0/vms
> > > gluster2:/disco2TB-0/vms gluster1:/disco1TB-0/vms
> > > gluster2:/disco1TB-0/vms gluster1:/disco1TB-1/vms
> > > gluster2:/disco1TB-1/vms
> > > 
> > > So I went ahead and installed Debian 12 and installed the same
> > > gluster version as the other servers, which is now 11.1 or
> > > something like that.
> > > On this new server, I have a small disk, about 480G in size.
> > > And I created 3 partitions formatted with XFS using imaxpct=75,
> > > as suggested in previous emails.
> > > 
> > > And then on the gluster nodes, I tried to add the bricks:
> > > gluster vol add-brick VMS replica 3 arbiter 1
> > > arbiter:/arbiter1/arbiter1 arbiter:/arbiter2/arbiter2
> > > arbiter:/arbiter3/arbiter3
> > > 
> > > But to my surprise (or not!) I got this message:
> > > volume add-brick: failed: Multiple bricks of a replicate volume
> > > are present on the same server. This setup is not optimal. Bricks
> > > should be on different nodes to have best fault tolerant
> > > configuration. Use 'force' at the end of the command if you want to
> > > override this behavior.
> > > 
> > > Why is that?
> > > 
> > >  
> > > 
> > > 
> > > 
> > > 
> > > ---
> > > 
> > > 
> > > Gilberto Nunes Ferreira
> > > 
> > > 
> > > 
> > > 
> > > ________
> > > 
> > > 
> > > 
> > > Community Meeting Calendar:
> > > 
> > > Schedule -
> > > Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> > > Bridge: https://meet.google.com/cpu-eiue-hvk
> > > Gluster-users mailing list
> > > Gluster-users at gluster.org
> > > https://lists.gluster.org/mailman/listinfo/gluster-users
> > 
> > 
