[Gluster-users] shrinking volume

harry mangalam hjmangalam at gmail.com
Thu Jun 21 16:43:15 UTC 2012


In 3.3, this is exactly what 'remove-brick' does.  It migrates the data
off the brick while the volume stays online, and once it's done, the
removed brick can be upgraded, shut down, retired, etc.

gluster volume remove-brick <vol> server:/brick start

(It takes a while to start up, but then proceeds fairly rapidly.)

The following is the output of a recent remove-brick I ran with 3.3:

% gluster volume remove-brick gl bs1:/raid1 status
     Node   Rebalanced-files          size      scanned     failures        status
---------   ----------------   -----------   ----------   ----------   -----------
localhost                  2   10488397779           12            0   in progress
      bs2                  0             0            0            0   not started
      bs3                  0             0            0            0   not started
      bs4                  0             0            0            0   not started

(time passes)

$ gluster volume remove-brick gl bs1:/raid2 status
     Node   Rebalanced-files          size      scanned     failures        status
---------   ----------------   -----------   ----------   ----------   -----------
localhost                952   26889337908         8306            0     completed



Note that once the status output says 'completed', you need to issue the
remove-brick command again to actually finalize the operation.
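In 3.3 the finalizing form is 'commit' (worth double-checking against the
docs for your exact version, since this syntax has moved around between
releases):

gluster volume remove-brick gl bs1:/raid1 commit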

And note that 'remove-brick' will not clear the directory structure on the
removed brick; you have to clean that up yourself.
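
If you want to wipe or later reuse the brick, something like the following
works on the removed node.  The paths are just the ones from my example
above, and the xattr names are from memory, so verify before running:

# on bs1, only after 'status' shows completed and the removal is committed:
rm -rf /raid1/*            # leftover user data (should be mostly gone already)
rm -rf /raid1/.glusterfs   # gluster's internal metadata tree
# if you ever want to re-add the same path as a brick, strip the volume
# xattrs too, or gluster will complain the path is already part of a volume:
setfattr -x trusted.glusterfs.volume-id /raid1
setfattr -x trusted.gfid /raid1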


On Thu, 2012-06-21 at 12:29 -0400, Brian Cipriano wrote:
> Hi all - is there a safe way to shrink an active gluster volume without 
> losing files?
> 
> I've used remove-brick before, but this causes the files on that brick 
> to be removed from the volume. Which is fine for some situations.
> 
> But I'm trying to remove a brick without losing files. This is because 
> our file usage can grow dramatically over short periods. During those 
> times we add a lot of buffer to our gluster volume, to keep it at about 
> 50% usage. After things settle down and file usage isn't changing as 
> much, we'd like to remove some bricks in order to keep usage at about 
> 80%. (These bricks are AWS EBS volumes - we want to remove the bricks to 
> save a little $ when things are slow.)
> 
> So what I'd like to do is the following. This is a simple distributed 
> volume, no replication.
> 
> * Let gluster know I want to remove a brick
> * No new files will go to that brick
> * Gluster starts copying files from that brick to other bricks, 
> essentially rebalancing the data
> * Once all files have been duplicated onto other bricks, the brick is 
> marked as "removed" and I can do a normal remove-brick
> * Over the course of this procedure the files are always available 
> because there's always at least one active copy of every file
> 
> This procedure seems very similar to replace-brick, except the goal 
> would be to evenly distribute to all other active bricks (without 
> interfering with pre-existing files), not one new brick.
> 
> Is there any way to do this?
> 
> I *could* just do my remove-brick, then manually distribute the files 
> from that old brick back onto the volume, but that would cause those 
> files to become unavailable for some amount of time.
> 
> Many thanks for all your help,
> 
> - brian
> 
> 
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users



