[Gluster-users] Shrinking gluster filesystem in 3.6.2

Ravishankar N ravishankar at redhat.com
Wed Apr 1 16:38:57 UTC 2015



On 03/31/2015 05:12 AM, Lilley, John F. wrote:
> Hi,
>
> I'd like to shrink my AWS/EBS-based *distribute-only* gluster file
> system by migrating the data to other already existing, active, and
> partially utilized bricks, but I found that the 'replace-brick start'
> operation mentioned in the documentation is now deprecated. I see that
> there has been some back and forth on the mailing list regarding
> migrating data using self-heal on a replicated system, but not so much
> on a distribute-only file system. Can anyone tell me the blessed way of
> doing this in 3.6.2? Is there one?
>
> To be clear, all of the EBS-based bricks are partially utilized at
> this point, so I'd need a method to migrate the data first.
>

If I understand you correctly, you want to replace a brick in a
distribute volume with one of lesser capacity. You could first add a new
brick and then remove the existing brick with the remove-brick
start/status/commit sequence. The 'start' phase migrates the data off
the brick being removed to the remaining bricks; once the status shows
'completed', 'commit' takes it out of the volume. Something like this:
------------------------------------------------
[root@tuxpad ~]# gluster volume info testvol

Volume Name: testvol
Type: Distribute
Volume ID: a89aa154-885c-4e14-8d3a-b555733b11f1
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: 127.0.0.2:/home/ravi/bricks/brick1
Brick2: 127.0.0.2:/home/ravi/bricks/brick2
Brick3: 127.0.0.2:/home/ravi/bricks/brick3
[root@tuxpad ~]#
[root@tuxpad ~]# gluster volume add-brick testvol 127.0.0.2:/home/ravi/bricks/brick{4..6}
volume add-brick: success
[root@tuxpad ~]#
[root@tuxpad ~]# gluster volume info testvol

Volume Name: testvol
Type: Distribute
Volume ID: a89aa154-885c-4e14-8d3a-b555733b11f1
Status: Started
Number of Bricks: 6
Transport-type: tcp
Bricks:
Brick1: 127.0.0.2:/home/ravi/bricks/brick1
Brick2: 127.0.0.2:/home/ravi/bricks/brick2
Brick3: 127.0.0.2:/home/ravi/bricks/brick3
Brick4: 127.0.0.2:/home/ravi/bricks/brick4
Brick5: 127.0.0.2:/home/ravi/bricks/brick5
Brick6: 127.0.0.2:/home/ravi/bricks/brick6
[root@tuxpad ~]#
[root@tuxpad ~]# gluster v remove-brick testvol 127.0.0.2:/home/ravi/bricks/brick{1..3} start
volume remove-brick start: success
ID: d535675e-8362-4a44-a291-1e567a77531e
[root@tuxpad ~]# gluster v remove-brick testvol 127.0.0.2:/home/ravi/bricks/brick{1..3} status
                               Node Rebalanced-files          size       scanned      failures       skipped         status   run time in secs
                          ---------      -----------   -----------   -----------   -----------   -----------   ------------   ----------------
                          localhost               10        0Bytes            20             0             0      completed               0.00
[root@tuxpad ~]#
[root@tuxpad ~]# gluster v remove-brick testvol 127.0.0.2:/home/ravi/bricks/brick{1..3} commit
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit: success
Check the removed bricks to ensure all files are migrated.
If files with data are found on the brick path, copy them via a gluster mount point before re-purposing the removed brick.
[root@tuxpad ~]#
[root@tuxpad ~]# gluster volume info testvol

Volume Name: testvol
Type: Distribute
Volume ID: a89aa154-885c-4e14-8d3a-b555733b11f1
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: 127.0.0.2:/home/ravi/bricks/brick4
Brick2: 127.0.0.2:/home/ravi/bricks/brick5
Brick3: 127.0.0.2:/home/ravi/bricks/brick6
[root@tuxpad ~]#
------------------------------------------------
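If you want to double-check that a removed brick is really empty before re-purposing its EBS volume, something along these lines should work (the brick path is just the one from the example above, so adjust it for your setup; the client mount point in the comment is hypothetical):
------------------------------------------------
# List any regular files left behind on a removed brick, skipping the
# .glusterfs housekeeping directory that gluster maintains on every brick.
find /home/ravi/bricks/brick1 -path '*/.glusterfs' -prune -o -type f -print

# If anything turns up, copy it back in through a normal client mount
# (e.g. /mnt/testvol) rather than writing to the brick directly.
------------------------------------------------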
Hope this helps.
Ravi

> Thank You,
> John
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
