[Gluster-users] File Corruption when adding bricks to live replica volumes
Lindsay Mathieson
lindsay.mathieson at gmail.com
Tue Jan 19 12:41:52 UTC 2016
On 19/01/2016 10:06 PM, Krutika Dhananjay wrote:
> Just to be sure we are not missing any steps here, you did invoke
> 'gluster volume heal datastore1 full' after adding the third brick,
> before the heal could begin, right?
Possibly not. I first ran 'gluster volume heal datastore1 info', which
showed the oddball heal in progress. Then I ran the 'heal full', which
didn't change anything (on the highest-uuid node :))
>
> As far as the reverse heal is concerned, there is a known issue with
> add-brick when the replica count is increased; the fix is still under
> review. Could you instead try the following steps at the time of
> add-brick and let me know whether they work:
>
> 1. Run 'gluster volume add-brick datastore1 replica 3
> vng.proxmox.softlog:/vmdata/datastore1' as usual.
>
> 2. Kill the glusterfsd process corresponding to the newly added brick
> (the brick on vng in your case). You can get its pid from the output
> of 'gluster volume status datastore1'.
> 3. Create a dummy file at the root of the volume via the mount point.
> The name can be anything.
> 4. Delete the dummy file created in step 3.
> 5. Bring the killed brick back up. For this, you can run 'gluster
> volume start datastore1 force'.
> 6. Then execute 'gluster volume heal datastore1 full' on the node with
> the highest uuid (we know how to find this from the previous thread on
> the same topic).
>
> Then monitor heal-info output to track heal progress.
> Let me know if this works.
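Pulling those steps together as a shell sketch (hostnames and paths are
the ones from this thread; the client mount point /mnt/datastore1 and
the manual pid lookup are assumptions, so adapt them to your setup):

    # 1. add the third brick (replica 2 -> 3)
    gluster volume add-brick datastore1 replica 3 \
        vng.proxmox.softlog:/vmdata/datastore1

    # 2. kill the new brick's glusterfsd on vng; read its pid from the
    #    'gluster volume status' output rather than hard-coding it
    gluster volume status datastore1
    kill <pid-of-vng-brick>

    # 3+4. create and then delete a dummy file via a client mount
    #      (/mnt/datastore1 is an assumed mount point)
    touch /mnt/datastore1/dummy-heal-trigger
    rm /mnt/datastore1/dummy-heal-trigger

    # 5. bring the killed brick back up
    gluster volume start datastore1 force

    # 6. on the highest-uuid node, kick off the full heal
    gluster volume heal datastore1 full

    # then watch heal progress
    watch gluster volume heal datastore1 info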
Will do - not right now, I have to go to bed :) but I'll let you know
tomorrow.
Thanks,
--
Lindsay Mathieson