[Gluster-users] remove-brick: sanity check

Ravishankar N ravishankar at redhat.com
Thu Aug 15 05:37:25 UTC 2013

On 08/14/2013 06:58 PM, Pete Smith wrote:
> Hello Gluster peeps
> Just sanity checking the procedure for removing bricks ...
> We're on v3.2.7, with four nodes (g1, g2, g3, g4), three bricks on
> each node. The first two bricks across all nodes form a replicated
> filesystem (gv1), the third brick a distributed filesystem (gv2).
> The plan is to bring down the usage on the filesystems to below 50%,
> remove the bricks from nodes three and four, and when the rebalance is
> complete, remove nodes three and four from the cluster.
> I've been reading the docs, and all seems to make sense. But I have
> some questions:
Hello Pete,
> 1. For replicated volumes, removing bricks _should_ be fine. ?
This should be okay, but it would be wise not to create files from the 
mount point while remove-brick is in progress, to ensure that the bricks 
being removed are not the _only_ ones containing the healthy copy of the 
data.
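One way to be safer on v3.2 is to trigger a full self-heal from a client mount before removing anything, so the surviving bricks hold up-to-date copies first. A sketch of the sequence follows; the mount point and brick paths are made-up examples (adjust to your layout), it assumes the g3/g4 bricks form complete replica pairs, and RUN=echo just previews each command instead of running it:

```shell
RUN=echo   # preview mode: prints each command; clear RUN to actually run them

# 1. Walk the volume from a client mount to trigger self-heal (the usual
#    v3.2 method). On a real run, redirect the stat noise to /dev/null.
$RUN find /mnt/gv1 -noleaf -exec stat {} \;

# 2. Remove the g3/g4 bricks as complete replica pairs (hypothetical paths):
$RUN gluster volume remove-brick gv1 g3:/export/brick1 g4:/export/brick1
$RUN gluster volume remove-brick gv1 g3:/export/brick2 g4:/export/brick2
```

Again, this is only a sketch; check `gluster volume info gv1` first to see how your bricks actually pair up.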
> 2. For distributed volumes, how do I make sure that data is moved to
> the bricks that I'm not going to remove?
glusterfs v3.2 doesn't seem to support [1] accessing data from the mount 
point for removed bricks. Version 3.3 and upwards supports migration 
of data via the remove-brick {start|status|commit} command sequence. [2]
You should really upgrade to the latest v3.4 :-)
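On 3.3+ that sequence would look roughly like the following. The volume name is gv2 from your setup, but the brick paths are placeholders, and RUN=echo previews the commands rather than executing them:

```shell
RUN=echo   # preview mode: prints each command; clear RUN to actually run them

# 1. Start migrating data off the bricks being removed:
$RUN gluster volume remove-brick gv2 g3:/export/brick3 g4:/export/brick3 start

# 2. Poll until the status reports the migration as completed on each brick:
$RUN gluster volume remove-brick gv2 g3:/export/brick3 g4:/export/brick3 status

# 3. Only then commit, which actually drops the bricks from the volume:
$RUN gluster volume remove-brick gv2 g3:/export/brick3 g4:/export/brick3 commit
```

The important part is not to commit until status shows the rebalance-out has finished, otherwise you can lose the data still sitting on the removed bricks.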

What you could try with 3.2 is to manually copy the files (while 
retaining the directory hierarchy) from the bricks of g3 and g4 into g1 
(or g2) and then run the rebalance command on the volume. While this seemed 
to work when I tried it once, I am not sure of the correctness of this 
approach.
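A rough sketch of that manual workaround, with the same caveats as above: brick paths are hypothetical, this is the unverified approach described in the paragraph rather than a supported procedure, and RUN=echo previews the commands:

```shell
RUN=echo   # preview mode: prints each command; clear RUN to actually run them

# 1. On g3 (and likewise g4), copy the brick contents into a g1 brick,
#    preserving the directory hierarchy and skipping files that already exist:
$RUN rsync -a --ignore-existing /export/brick3/ g1:/export/brick3/

# 2. Rebalance the volume so the layout matches where the files now live:
$RUN gluster volume rebalance gv2 start
```

Verify from a client mount that everything is visible before taking the g3/g4 bricks away.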



> Any pointers appreciated.
> Thanks.
