[Gluster-users] distributed replicated volume - added bricks

Strahil hunter86_bg at yahoo.com
Sat Oct 26 09:00:31 UTC 2019


Hi Herb,

First check your data usage, as you need enough free space on the remaining bricks. Bricks have to be removed in multiples of the replica set size.
For a replica 2 distributed volume you must specify at least 2 bricks, or any multiple of 2 (2, 4, 6, 8 bricks).
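A quick way to check the free space is to look at the brick mountpoints directly, or to ask gluster itself (paths here are taken from your status output below; the grep pattern assumes the field is called 'Disk Space Free', which may vary slightly between versions):

df -h /gluster_bricks/data3 /gluster_bricks/data4
gluster volume status <VOLNAME> detail | grep 'Disk Space Free'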

The removal consists of several steps.

1. Start the removal via:

gluster volume remove-brick <VOLNAME> <BRICKNAME> start

This only migrates the data to the remaining bricks.
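For example, using the replica pair from your status output quoted below (the volume name is only a placeholder), both bricks of the set are given in one command:

gluster volume remove-brick <VOLNAME> server1:/gluster_bricks/data3 server1:/gluster_bricks/data4 start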

2. Check the status via:

gluster volume remove-brick <VOLNAME> <BRICKNAME> status

Once completed, check the logs for any rebalance issues.
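For example:

gluster volume remove-brick <VOLNAME> server1:/gluster_bricks/data3 server1:/gluster_bricks/data4 status

The rebalance log is usually /var/log/glusterfs/<VOLNAME>-rebalance.log on each node, though the exact path can differ depending on your installation.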

You can verify the data/inode count on the bricks via:

gluster volume status <VOLNAME> detail
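For example, to show only the brick lines and the space/inode fields (field names can differ slightly between versions):

gluster volume status <VOLNAME> detail | grep -E 'Brick|Disk Space Free|Inode Count|Free Inodes'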


3. If everything is fine, proceed with the actual removal:

gluster volume remove-brick <VOLNAME> <BRICKNAME> commit
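Note that you have to pass the same brick list to 'commit' that you passed to 'start', for example:

gluster volume remove-brick <VOLNAME> server1:/gluster_bricks/data3 server1:/gluster_bricks/data4 commit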


4. Finally, verify the volume status.
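For example:

gluster volume info <VOLNAME>
gluster volume status <VOLNAME>

The removed bricks should no longer be listed, and the 'Number of Bricks' line should have dropped accordingly.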

Source:
https://docs.gluster.org/en/latest/Administrator%20Guide/Managing%20Volumes/#shrinking-volumes

Best Regards,
Strahil Nikolov
On Oct 25, 2019 20:26, Herb Burnswell <herbert.burnswell at gmail.com> wrote:
>
> Thank you for the reply Strahil.
>
> Unfortunately we did do the rebalance already, so the data should be written across all bricks currently.  I'm fine with pulling these newly added bricks out of the volume.  However, is it as simple as pulling them out and the data will rebalance to the disks that are left?
>
> Thanks,
>
> HB 
>
> On Sat, Oct 19, 2019 at 4:13 PM Strahil Nikolov <hunter86_bg at yahoo.com> wrote:
>>
>> Most probably this means that data on 
>> Brick server1:/gluster_bricks/data3       49164     0          Y       4625 
>> Brick server1:/gluster_bricks/data4       49165     0          Y       4644
>>
>> is the same, and when server1 goes down you will have no access to the data on this set.
>> Same should be valid for :
>> Brick server1:/gluster_bricks/data5       49166     0          Y       5088 
>> Brick server1:/gluster_bricks/data6       49167     0          Y       5128 
>> Brick server2:/gluster_bricks/data3       49168     0          Y       22314
>> Brick server2:/gluster_bricks/data4       49169     0          Y       22345
>> Brick server2:/gluster_bricks/data5       49170     0          Y       22889
>> Brick server2:/gluster_bricks/data6       49171     0          Y       22932
>>
>> I would remove those bricks and add them again, this time always specifying one bri

