[Gluster-users] Replica bricks fungible?

Zenon Panoussis oracle at provocation.net
Wed Jun 9 23:14:00 UTC 2021


> it will require quite a lot of time to *rebalance*...

(my emphasis on "rebalance"). Just to avoid any misunderstandings,
I am talking about pure replica. No distributed replica and no
arbitrated replica. I guess that moving bricks would also work
on a distributed replica, as long as the move stays within the
same replica set rather than crossing into another one, but
that's only a guess.

> Have you documented the procedure you followed?

I did several different things. I moved a brick from one path
to another on the same server, and I also moved a brick from
one server to another. The procedure in both cases is the same.

# gluster volume heal gv0 statistics heal-count

If every "Number of entries" in the heal count output is 0,

# for n in node01 node02 node03; do ssh root@$n "systemctl stop glusterd"; done

(This is to prevent anything from writing to any node while the
copy/move operations are ongoing. It is not necessary if you have
unmounted all the clients.)
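
If you want to script those two steps, a rough sketch (assuming
the heal-count output prints one "Number of entries:" line per
brick) could look like this:

#!/bin/bash
# Stop glusterd on every replica node, but only if the heal
# backlog is empty. Volume and node names are the example's.
pending=$(gluster volume heal gv0 statistics heal-count \
          | awk -F: '/Number of entries/ {sum += $2} END {print sum+0}')
if [ "$pending" -eq 0 ]; then
    for n in node01 node02 node03; do
        ssh root@$n "systemctl stop glusterd"
    done
else
    echo "heal backlog is $pending entries, not stopping glusterd" >&2
    exit 1
fi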

# ssh root@node04
# rsync -vvaz --progress node01:/gfsroot/gv0 /gfsroot/

node04 in the above example is the new node. The destination
could also be a new brick on an existing node, for example:

# mount /dev/sdnewdisk1 /gfsnewroot
# rsync -vva --progress /gfsroot/gv0 /gfsnewroot/
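
Either way, you can double-check that the copy is complete with a
checksum-based dry run (only a sketch, using the first example's
paths); it should print next to nothing if the two trees match:

# rsync -ainc node01:/gfsroot/gv0 /gfsroot/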

Once you have a full copy of the old brick in the new location,
you can just

# for n in node01 node02 node03 node04; do ssh root@$n "systemctl start glusterd"; done
# gluster volume add-brick gv0 replica 4 node04:/gfsroot/gv0
# gluster vol status
# gluster volume remove-brick gv0 replica 3 node01:/gfsroot/gv0

In this example I use add-brick first, before remove-brick, so
as to avoid the theoretical risk of split-brain on a 3-brick
volume that is momentarily left with only two bricks. In real
life you will either have many more bricks than three, or you
will have kicked out all clients before this procedure, so the
order of add and remove won't matter.
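
After the add-brick, and before removing anything, it doesn't hurt
to confirm that the new brick is connected and that self-heal has
little or nothing left to do; roughly:

# gluster volume info gv0
# gluster volume heal gv0 info
# gluster volume heal gv0 statistics heal-count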


