comments inline

----------------------------------------
From: "Hu Bert" <revirii@googlemail.com>
To: "Ashish Pandey" <aspandey@redhat.com>
Cc: "Gluster Users" <gluster-users@gluster.org>
Sent: Monday, January 7, 2019 12:41:29 PM
Subject: Re: [Gluster-users] Glusterfs 4.1.6

Hi Ashish & all others,

if I may jump in... I have a little question, if that's ok?
Are replace-brick and reset-brick different commands for two distinct
problems? I once had a faulty disk (= brick); it got replaced
(hot-swap) and received the same identifier (/dev/sdd again). I
followed this guide:

https://docs.gluster.org/en/v3/Administrator%20Guide/Managing%20Volumes/
-->> "Replacing bricks in Replicate/Distributed Replicate volumes"

If I understand it correctly:

- replace-brick is for "I have an additional disk and want to move the
data from the existing brick to the new brick"; the old brick is
removed from the volume and the new brick is added to it.
- reset-brick is for "one of my hdds crashed and will be replaced by a
new one"; the brick name stays the same.

Did I get that right? If so: holy smokes... then I had misunderstood
this completely (sorry @Pranith & Xavi). The wording is a bit strange
here...

>>>>>>>>>>>>>>>>>>>>>>>>>
Yes, your understanding is correct. In addition to the above, there is
one more use of reset-brick: if you want to change the hostname of a
server whose bricks are registered by hostname, you can use reset-brick
to switch the bricks from the hostname to the IP address and then
change the server's hostname.
In short, whenever you want to change something on one of the bricks
while its location and mount point stay the same, you should use
reset-brick.
>>>>>>>>>>>>>>>>>>>>>>>>>
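
To make the distinction concrete, a minimal sketch of the two command
forms (the volume and brick names below are illustrative, not taken
from this thread):

    # replace-brick: move a brick to a NEW location; the old brick is
    # removed from the volume, the new one is added and then healed
    gluster volume replace-brick testvol node1:/bricks/old node1:/bricks/new commit force

    # reset-brick: keep the SAME host:path, e.g. after a hot-swapped disk;
    # take the brick down, then re-commit it to trigger the heal
    gluster volume reset-brick testvol node1:/bricks/brick1 start
    gluster volume reset-brick testvol node1:/bricks/brick1 node1:/bricks/brick1 commit force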

Thx
Hubert

On Thu, Jan 3, 2019 at 12:38 PM Ashish Pandey <aspandey@redhat.com> wrote:
>
> Hi,
>
> Some of the steps you followed are not correct.
> You should have used the reset-brick command, which was introduced for
> exactly the task you wanted to do:
>
> https://docs.gluster.org/en/v3/release-notes/3.9.0/
>
> Your thinking was correct, but replacing a faulty disk requires some
> additional work which this command does automatically.
>
> Step 1 :- kill pid of the faulty brick in node >>>>>> This should be done with the "reset-brick start" command; follow the steps in the link.
> Step 2 :- running volume status shows "N/A" under 'pid' & 'TCP port'
> Step 3 :- replace the disk and mount the new disk at the same mount point where the old disk was mounted
> Step 4 :- run command "gluster v start volname force" >>>>>>>>>>>> This should be done with the "reset-brick commit force" command, which will trigger the heal. Follow the link.
> Step 5 :- running volume status shows "N/A" under 'pid' & 'TCP port'
>
> ---
> Ashish
>
> ________________________________
> From: "Amudhan P" <amudhan83@gmail.com>
> To: "Gluster Users" <gluster-users@gluster.org>
> Sent: Thursday, January 3, 2019 4:25:58 PM
> Subject: [Gluster-users] Glusterfs 4.1.6
>
> Hi,
>
> I am working with Glusterfs 4.1.6 on a test machine. I am trying to
> replace a faulty disk; below are the steps I took, but I wasn't
> successful.
>
> 3 nodes, 2 disks per node, disperse volume 4+2:
> Step 1 :- kill pid of the faulty brick in node
> Step 2 :- running volume status shows "N/A" under 'pid' & 'TCP port'
> Step 3 :- replace the disk and mount the new disk at the same mount point where the old disk was mounted
> Step 4 :- run command "gluster v start volname force"
> Step 5 :- running volume status still shows "N/A" under 'pid' & 'TCP port'
>
> The expected behavior was that a new brick process and a heal should
> have started.
>
> Following the above steps on 3.10.1 works perfectly: a new brick
> process starts and the heal begins. But the same steps do not work on
> 4.1.6. Did I miss any steps? What should be done?
>
> Amudhan
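
Putting Ashish's corrections together, the whole procedure for the case
above would look roughly like this (volume name, host, brick path, and
device below are illustrative; this is a sketch of the documented
reset-brick flow, not a tested recipe):

    # Steps 1-2: take the faulty brick offline cleanly
    # (replaces killing the brick pid by hand)
    gluster volume reset-brick testvol node1:/bricks/brick2 start
    gluster volume status testvol   # brick now shows "N/A" for pid and TCP port

    # Step 3: swap the disk, create a fresh filesystem, remount at the same path
    mkfs.xfs -f /dev/sdd
    mount /dev/sdd /bricks/brick2

    # Step 4: re-commit the same brick path to restart the brick process
    # and trigger the heal (replaces "gluster v start volname force")
    gluster volume reset-brick testvol node1:/bricks/brick2 node1:/bricks/brick2 commit force

    # Step 5: verify that the brick got a pid/port and that healing has begun
    gluster volume status testvol
    gluster volume heal testvol info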