[Gluster-users] remove-brick: sanity check
Pete Smith
pete at realisestudio.com
Wed Aug 14 13:28:05 UTC 2013
Hello Gluster peeps
Just sanity checking the procedure for removing bricks ...
We're on v3.2.7, with four nodes (g1, g2, g3, g4) and three bricks on
each node. The first two bricks on each node belong to a replicated
volume (gv1); the third brick on each node belongs to a distributed
volume (gv2).
The plan is to bring usage on both volumes down below 50%, remove the
bricks on nodes three and four, rebalance, and once the rebalance is
complete, detach nodes three and four from the cluster.
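For reference, here's roughly the command sequence I have in mind. The
brick paths below are placeholders rather than our real ones, and I
realise the gv1 bricks would have to be removed in complete replica
sets as reported by `gluster volume info`:

    # check the current brick layout and replica pairing
    gluster volume info

    # gv2 (distributed): drop the bricks on g3 and g4
    gluster volume remove-brick gv2 g3:/export/gv2 g4:/export/gv2

    # gv1 (replicated): drop the replica set(s) held on g3 and g4
    gluster volume remove-brick gv1 g3:/export/gv1a g3:/export/gv1b \
        g4:/export/gv1a g4:/export/gv1b

    # rebalance what remains (gv1 too, if it's distribute-replicate)
    gluster volume rebalance gv2 start
    gluster volume rebalance gv2 status

    # once both volumes look healthy, detach the nodes
    gluster peer detach g3
    gluster peer detach g4

Does that ordering look sane?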
I've been reading the docs, and all seems to make sense. But I have
some questions:
1. For the replicated volume (gv1), removing bricks _should_ be fine, correct?
2. For the distributed volume (gv2), how do I make sure the data gets
migrated to the bricks I'm not going to remove?
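On question 2, the only check I can think of is watching the rebalance
and then eyeballing the bricks themselves, something along these lines
(again, placeholder paths):

    # wait for the rebalance to report completion
    gluster volume rebalance gv2 status

    # confirm the g3/g4 bricks have actually emptied out
    ssh g3 'df -h /export/gv2; find /export/gv2 -type f | wc -l'
    ssh g4 'df -h /export/gv2; find /export/gv2 -type f | wc -l'

Is that sufficient on 3.2.7, or is there a proper way to drain a brick
before removing it?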
Any pointers appreciated.
Thanks.
--
Pete Smith
DevOp/System Administrator
Realise Studio
12/13 Poland Street, London W1F 8QB
T. +44 (0)20 7165 9644
realisestudio.com