[Gluster-users] Proper procedure to reduce an active volume
Diego Zuccato
diego.zuccato at unibo.it
Thu Feb 4 07:05:36 UTC 2021
On 03/02/21 18:15, Strahil Nikolov wrote:
Thanks for the fast answer.
> Replica volumes do not require the 'start + commit' - it's needed only
> for distributed replicated volumes and other types of volumes.
> Yet, I'm not sure if removing a data brick (and keeping the arbiter)
> makes any sense. Usually, I just remove 1 data copy + the arbiter to
> reshape the volume.
Well, actually I need to remove both data bricks and the arbiters
without losing the data. Probably that wasn't clear, sorry.
The current pods have 28x10TB disks and all the arbiters are on a VM.
The new pod has only 26 disks.
What I want to do is remove one disk from each of the current pods, move
one of the freed disks to the new pod (this way each pod will have 27
disks and I'll have a cold spare to quickly replace a failed disk) and
distribute the arbiters across the three pods so the VM can be
decommissioned.
If possible, I'd prefer to keep redundancy (hence not going through
replica 1 as an intermediate step).
> Keep in mind that as you remove a brick you need to specify the new
> replica count.
> For example you have 'replica 3 arbiter 1' and you want to remove the
> second copy and the arbiter:
> gluster volume remove-brick <VOLUME> replica 1 server2:/path/to/brick
> arbiter:/path/to/brick force
That's what I want to avoid :)
I need to migrate the data out of s1:/bricks/27, s2:/bricks/27 and
s3:/arbiters/27, redistributing it to the remaining bricks.
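If I read the docs right, something like this should do it (a sketch,
assuming the 27s form one replica set of the distributed-replicated
volume, so no replica count change is needed):

  # 'start' migrates the set's data onto the remaining bricks
  gluster volume remove-brick <VOLUME> \
      s1:/bricks/27 s2:/bricks/27 s3:/arbiters/27 start
  # poll until all nodes report 'completed'
  gluster volume remove-brick <VOLUME> \
      s1:/bricks/27 s2:/bricks/27 s3:/arbiters/27 status
  # then actually drop the bricks from the volume
  gluster volume remove-brick <VOLUME> \
      s1:/bricks/27 s2:/bricks/27 s3:/arbiters/27 commit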
BTW, isn't replica count an attribute of the whole volume?
> If you wish to reuse block devices, don't forget to rebuild the FS (as
> it's the fastest way to clean up)!
Yup. Already been bitten by leftover EAs (extended attributes) :)
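For the record, this is what I plan to run on each freed brick before
reusing it (device name is hypothetical; the xattr cleanup is the
fallback if reformatting isn't an option):

  # fastest cleanup: rebuild the filesystem (assuming XFS on a
  # dedicated device)
  mkfs.xfs -f /dev/sdX
  # alternative: strip the Gluster metadata that a later add-brick
  # would trip over
  setfattr -x trusted.glusterfs.volume-id /bricks/27
  setfattr -x trusted.gfid /bricks/27
  rm -rf /bricks/27/.glusterfs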
> When you increase the count (add second data brick and maybe arbiter),
> you should run:
> gluster volume add-brick <VOLUME> replica 3 arbiter 1
> server4:/path/to/brick arbiter2:/path/to/brick
> gluster volume heal <VOLUME> full
That will be useful when more disks are added.
After removing the last bricks (isn't there a term for "all the
components of a replica set"? slice?), I thought I could move the
remaining bricks with replace-brick and keep a "rotating" distribution
(one step spelled out after the table):
slice | s1  | s2  | s3
  00  | b00 | b00 | a00  (vm.a00 -> s3.a00)
  01  | a00 | b01 | b00  (s1.b01 -> s3.b00, vm.a01 -> s1.a00)
  02  | b01 | a00 | b01  (s1.b02 -> s1.b01, s2.b02 -> s3.b01, vm.a02 -> s2.a00)
[and so on]
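In commands, the slice 00 step would be something like (brick paths are
hypothetical):

  # swap the arbiter from the VM onto s3; self-heal then repopulates
  # it from the two data bricks
  gluster volume replace-brick <VOLUME> \
      vm:/arbiters/00 s3:/arbiters/00 commit force
  gluster volume heal <VOLUME> full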
That will take quite a long time (IIUC I cannot move data onto a brick
that is itself being moved elsewhere... or at least it doesn't seem
wise :) ).
It's probably faster to move the arbiters first and then the data.
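Either way, I guess I should wait for pending heals to drain between
steps:

  gluster volume heal <VOLUME> info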
--
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786