[Gluster-users] Glusterfs 4.1.6

Amudhan P amudhan83 at gmail.com
Thu Jan 3 10:55:58 UTC 2019


I am working with GlusterFS 4.1.6 on a test machine. I am trying to replace a
faulty disk; below are the steps I followed, but they were not successful.

3 nodes, 2 disks per node, disperse volume 4+2 :-
Step 1 :- kill the PID of the faulty brick on the node
Step 2 :- run "gluster volume status"; it shows "N/A" under 'Pid' & 'TCP Port'
for that brick
Step 3 :- replace the disk and mount the new disk at the same mount point
where the old disk was mounted
Step 4 :- run "gluster v start volname force"
Step 5 :- run "gluster volume status" again; it still shows "N/A" under 'Pid'
& 'TCP Port'
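The steps above can be sketched as a shell sequence. This is a minimal sketch, not a verified procedure: the volume name "volname" is kept from the post, while the brick mount point /bricks/brick2 and device /dev/sdX are assumptions for illustration.

```shell
# Step 1: find the faulty brick's PID and kill it
# (the Pid column of "gluster volume status" lists it)
gluster volume status volname
kill -15 <brick-pid>            # placeholder: use the Pid shown above

# Step 2: confirm the brick is down - Pid and TCP Port show "N/A"
gluster volume status volname

# Step 3: replace the physical disk, make a filesystem, and remount it
# at the same mount point the old disk used (paths are assumptions)
mkfs.xfs /dev/sdX
mount /dev/sdX /bricks/brick2

# Step 4: force-start the volume so glusterd spawns a brick process
# for the replacement disk
gluster volume start volname force

# Step 5: verify a new PID and TCP port appear, then watch the heal
gluster volume status volname
gluster volume heal volname info
```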

The expected behavior was that a new brick process would start and healing
would begin.

Following the same steps on 3.10.1 works perfectly: a new brick process
starts and healing begins. But the same steps do not work on 4.1.6. Did I
miss any steps? What should be done?
