Hi,

Some of the steps provided by you are not correct.
You should have used the reset-brick command, which was introduced for exactly the task you wanted to do.

https://docs.gluster.org/en/v3/release-notes/3.9.0/

Although your thinking was correct, replacing a faulty disk requires some additional tasks which this command will do automatically.

Step 1 :- kill pid of the faulty brick in node >>>>>> This should be done using the "reset-brick start" command. Follow the steps provided in the link.
Step 2 :- running volume status shows "N/A" under 'pid' & 'TCP port'
Step 3 :- replace the disk and mount the new disk on the same mount point where the old disk was mounted
Step 4 :- run command "gluster v start volname force" >>>>>>>>>>>> This should be done using the "reset-brick commit force" command. This will trigger the heal. Follow the link.
Step 5 :- running volume status, shows "N/A" under 'pid' & 'TCP port' >>>>>> After "reset-brick commit force", volume status should show a valid pid and TCP port for the new brick.
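For illustration, the sequence would look roughly like the following ("volname" is taken from your own command; "node1" and "/bricks/brick1" are placeholders for the node and the mount point of the faulty brick, so substitute your own, and treat the linked release notes as the authoritative reference):

    # instead of killing the brick pid by hand, take the faulty brick offline cleanly
    gluster volume reset-brick volname node1:/bricks/brick1 start

    # now replace the disk and mount the new disk on the same mount point (/bricks/brick1)

    # bring the brick back on the new disk; this restarts the brick process and triggers the heal
    gluster volume reset-brick volname node1:/bricks/brick1 node1:/bricks/brick1 commit force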
---
Ashish

----------------------------------------------------------------------
From: "Amudhan P" <amudhan83@gmail.com>
To: "Gluster Users" <gluster-users@gluster.org>
Sent: Thursday, January 3, 2019 4:25:58 PM
Subject: [Gluster-users] Glusterfs 4.1.6

Hi,

I am working on Glusterfs 4.1.6 on a test machine.
I am trying to replace a faulty disk; below are the steps I followed, but they were not successful.

3 Nodes, 2 disks per node, Disperse Volume 4+2 :-
Step 1 :- kill pid of the faulty brick in node
Step 2 :- running volume status shows "N/A" under 'pid' & 'TCP port'
Step 3 :- replace the disk and mount the new disk on the same mount point where the old disk was mounted
Step 4 :- run command "gluster v start volname force"
Step 5 :- running volume status shows "N/A" under 'pid' & 'TCP port'

The expected behavior was that a new brick process and a heal should have started.

Following the above steps on 3.10.1 works perfectly: a new brick process starts and the heal begins.
But the same steps are not working in 4.1.6. Did I miss any steps? What should be done?

Amudhan

_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users