<div dir="ltr">Hi,<div><br></div><div>I am working on Glusterfs 4.1.6 on a test machine. I am trying to replace a faulty disk and below are the steps I did but wasn't successful with that.</div><div><br></div><div>3 Nodes, 2 disks per node, Disperse Volume 4+2 :-</div><div>Step 1 :- kill pid of the faulty brick in node </div><div>Step 2 :- running volume status, shows "N/A" under 'pid' & 'TCP port'</div><div>Step 3 :- replace disk and mount new disk in same mount point where the old disk was mounted </div><div>Step 4 :- run command "gluster v start volname force"</div><div>Step 5 :- running volume status, shows "N/A" under '<span class="gmail-gr_ gmail-gr_778 gmail-gr-alert gmail-gr_spell gmail-gr_inline_cards gmail-gr_disable_anim_appear gmail-ContextualSpelling gmail-ins-del gmail-multiReplace" id="gmail-778" style="display:inline;border-bottom:2px solid transparent;background-repeat:no-repeat">pid</span>' & 'TCP port'</div><div><br></div><div>expected behavior was a new brick process & heal should have started.</div><div><br></div><div>following above said steps 3.10.1 works perfectly, starting a new brick process and heal begins.</div><div>But the same step not working in 4.1.6, Did I miss any steps? what should be done?</div><div><br></div><div>Amudhan<br></div></div>