[Gluster-users] Proper procedure to reduce an active volume

Nag Pavan Chilakam nag.chilakam at gmail.com
Thu Feb 4 18:28:27 UTC 2021


On Wed, 3 Feb 2021, 18:33 Diego Zuccato, <diego.zuccato at unibo.it> wrote:

> Hello all.
>
> What is the proper procedure to reduce a "replica 3 arbiter 1" volume?
>
Can you kindly elaborate on the volume configuration? Is this a plain arbiter
volume or a distributed arbiter volume?
Please share the volume info so that we can help you better.
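
For reference, the output of the following command is what would help here
(VOLNAME below is a placeholder for your actual volume name):

# gluster volume info VOLNAME

The "Number of Bricks" line in that output (for example something like
"1 x (2 + 1) = 3" versus "2 x (2 + 1) = 6") should show whether this is a
plain arbiter volume or a distributed arbiter volume.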

> The procedure I've found is:
> 1) # gluster volume remove-brick VOLNAME BRICK start
> (repeat for each brick to be removed, but being a r3a1 should I remove
> both bricks and the arbiter in a single command or multiple ones?)
>
No, you do not need multiple commands: you can mention all the bricks of a
distribute subvolume in one command. If you have a plain 1x(2+1a) volume,
then you should mention only one brick; start by removing the arbiter brick.
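
A minimal sketch of both cases, with a hypothetical volume name, hostnames
and brick paths:

For a distributed arbiter volume (e.g. 2 x (2 + 1a)), list all three bricks
of the distribute subvolume you want to drop in a single command, then
monitor and commit as in your steps 2 and 3:

# gluster volume remove-brick VOLNAME \
    server1:/bricks/b2 server2:/bricks/b2 arbiter:/bricks/arb2 start

For a plain 1 x (2 + 1a) volume there is no other subvolume to migrate the
data to, so the reduction is done by dropping the arbiter brick and lowering
the replica count, which uses force rather than start/commit:

# gluster volume remove-brick VOLNAME replica 2 arbiter:/bricks/arb1 force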

> 2) # gluster volume remove-brick VOLNAME BRICK status
> (to monitor migration)
> 3) # gluster volume remove-brick VOLNAME BRICK commit
> (to finalize the removal)
> 4) umount and reformat the freed (now unused) bricks
> Is this safe?

What is the actual need to remove the bricks?
If you feel this volume is not needed anymore, then just delete the whole
volume instead of going through the removal of each brick.
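
If deleting the whole volume is indeed the goal, a minimal sketch of that
path (again with VOLNAME as a placeholder, and assuming all clients have
unmounted it first):

# gluster volume stop VOLNAME
# gluster volume delete VOLNAME

After the delete you can umount and reformat all of the bricks in one go.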

>
> And once the bricks are removed I'll have to distribute arbiters across
> the current two data servers and a new one (currently I'm using a
> dedicated VM just for the arbiters). But that's another pie :)
>
> --
> Diego Zuccato
> DIFA - Dip. di Fisica e Astronomia
> Servizi Informatici
> Alma Mater Studiorum - Università di Bologna
> V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
> tel.: +39 051 20 95786
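
On the later point about spreading the arbiters across the data servers, the
usual way to move a single arbiter brick to a new host is replace-brick; a
minimal sketch, with hypothetical hostnames and brick paths:

# gluster volume replace-brick VOLNAME \
    arbitervm:/bricks/arb1 newserver:/bricks/arb1 commit force

Self-heal then rebuilds the arbiter metadata on the new brick; check with
"gluster volume heal VOLNAME info" before moving the next one.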

