[Gluster-users] No healing after replacing a brick in a replicated volume

Mike Chen mike.scchen at gmail.com
Thu May 14 20:05:27 UTC 2015


Hi,
I'm testing a replicated volume with a 3-VM setup:
gfs1:/export/sda3/brick
gfs2:/export/sda3/brick
gfsc as client

The volume name is gfs.
The Gluster version in this test is 3.6.3, on CentOS 6.6.
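
For completeness, the volume was created along these lines (reconstructed
from memory, so treat it as a sketch rather than the exact history):

    # on gfs1, after the nodes can reach each other
    gluster peer probe gfs2
    gluster volume create gfs replica 2 gfs1:/export/sda3/brick gfs2:/export/sda3/brick
    gluster volume start gfs
    # on gfsc, mount the volume as a client
    mount -t glusterfs gfs1:/gfs /mnt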

A replica 2 volume is created, and I try to simulate a brick failure with
these steps (a command-level sketch follows the list):
1. stop glusterd and any remaining gluster processes on gfs1
2. unmount the brick
3. mkfs.xfs the brick
4. mount it back
5. start the gluster service
6. volume remove-brick gfs replica 1 gfs1:/export/sda3/brick force
7. volume add-brick gfs replica 2 gfs1:/export/sda3/brick
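
Concretely, on gfs1 that was roughly the following (assuming the brick
filesystem is /dev/sda3 mounted at /export/sda3; paths inferred from the
brick name, so adjust to your layout):

    service glusterd stop                # step 1
    pkill glusterfsd                     # step 1: kill any leftover brick process
    umount /export/sda3                  # step 2
    mkfs.xfs -f /dev/sda3                # step 3: wipe the brick filesystem
    mount /dev/sda3 /export/sda3         # step 4
    mkdir -p /export/sda3/brick          # recreate the brick directory lost to mkfs
    service glusterd start               # step 5
    gluster volume remove-brick gfs replica 1 gfs1:/export/sda3/brick force   # step 6
    gluster volume add-brick gfs replica 2 gfs1:/export/sda3/brick            # step 7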

At this point, "volume info gfs" shows the volume as a two-brick
replicated volume, which is fine.
But Gluster somehow thinks the volume doesn't need healing:
issuing "gluster volume heal gfs full" did not heal the volume, and no data
was copied from the gfs2 brick to gfs1.
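
For anyone diagnosing this, the checks I know of for inspecting the heal
state are the following (I can post their output if useful):

    gluster volume heal gfs info                  # per-brick list of entries pending heal
    gluster volume status gfs                     # whether the Self-heal Daemon is online on both nodes
    getfattr -d -m . -e hex /export/sda3/brick    # brick-root xattrs (trusted.glusterfs.volume-id, trusted.afr.*)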
Is the problem in my replacement procedure, or something else?
Please advise ;)

Mike