[Gluster-users] Issues removing then adding a brick to a replica volume (Gluster 3.7.6)

Sahina Bose sabose at redhat.com
Thu Feb 25 08:24:49 UTC 2016



On 02/23/2016 04:34 PM, Lindsay Mathieson wrote:
> On 23/02/2016 8:29 PM, Sahina Bose wrote:
>> Late jumping into this thread, but curious -
>>
>> Is there a specific reason that you are removing and adding a brick? 
>> Will replace-brick not work for you?
>
>
> Testing procedures for replacing a failed brick (disk crash etc),
>

The recommended way to replace a brick in a replica volume is:
gluster volume replace-brick <volname> <src brick path> <new brick path> commit force
We found that the heal-related issues you encountered when decreasing
and then increasing the replica count do not exist with this approach.
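
For example, swapping out a failed brick on host3 for a fresh one might
look like the following (the volume name "testvol" and the brick paths
are placeholders; substitute your own):

    gluster volume replace-brick testvol \
        host3:/bricks/brick1 host3:/bricks/brick1-new commit force

After the commit, self-heal should populate the new brick from the
remaining replicas.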

In case the entire host needs to be replaced (for instance re-installing
the host or reformatting disks - and assuming the brick directories are
the same as before), here is a flow that works; a command-level sketch
follows the list of steps. Can you check if this will solve your use case?

(Follow steps 1-4 only if host3 has been re-installed and
/var/lib/glusterd re-initialized)

 1. Stop glusterd on host being replaced (say, host3)
 2. Check gluster peer status from working node to obtain previous UUID
    of host3
 3. Edit the gluster UUID in /var/lib/glusterd/glusterd.info on host3
    to the previous UUID obtained in step 2.
 4. Copy the peer info from a working peer to /var/lib/glusterd/peers
    (omitting the file for the peer UUID of the node being replaced,
    here host3)
 5. Create and then remove a temporary directory at the volume mount
    points
 6. Restart glusterd -- heal will start and the brick on the replaced
    node should be synced automatically.
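
For reference, here is roughly what the steps above look like at the
command line. This is only a sketch: the working peer name (host1), the
volume name (testvol), the client mount point (/mnt/testvol) and the
UUID are placeholders, and the service commands assume a systemd-based
distribution.

    # Step 1 - on host3
    systemctl stop glusterd

    # Step 2 - on a working node (e.g. host1), note host3's previous UUID
    gluster peer status              # record the "Uuid:" shown for host3

    # Step 3 - on host3, restore the old identity by setting
    # UUID=<old-host3-uuid> in glusterd.info
    vi /var/lib/glusterd/glusterd.info

    # Step 4 - on host3, copy peer definitions from a working node and
    # remove the file named after host3's own UUID (a node must not list itself)
    scp host1:/var/lib/glusterd/peers/* /var/lib/glusterd/peers/
    rm -f /var/lib/glusterd/peers/<old-host3-uuid>

    # Step 5 - from a client mount of the volume (assumed at /mnt/testvol)
    mkdir /mnt/testvol/tmp-heal-trigger
    rmdir /mnt/testvol/tmp-heal-trigger

    # Step 6 - back on host3
    systemctl start glusterd
    gluster volume heal testvol info   # optional: watch the resync progress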


