From: "Hu Bert" <revirii@googlemail.com>
To: "Ashish Pandey" <aspandey@redhat.com>
Cc: "Gluster Users" <gluster-users@gluster.org>
Sent: Monday, January 7, 2019 1:28:22 PM
Subject: Re: [Gluster-users] Glusterfs 4.1.6

Hi,

thx Ashish for the clarification. Just another question... in case of an hdd failure (let's say sdd) with identical brick paths (mount: /gluster/bricksdd1), the commands should look like this:

gluster volume reset-brick $volname $host:/gluster/bricksdd1 start
>> change hdd, create partition & filesystem, mount <<
gluster volume reset-brick $volname $host:/gluster/bricksdd1 $host:/gluster/bricksdd1 commit force

>>> Correct.

Is it possible to change the mountpoint/brick name with this command? In my case:
old: /gluster/bricksdd1_new
new: /gluster/bricksdd1
i.e. only the mount point is different.

gluster volume reset-brick $volname $host:/gluster/bricksdd1_new $host:/gluster/bricksdd1 commit force

I would try to:
- gluster volume reset-brick $volname $host:/gluster/bricksdd1_new start
- reformat sdd etc.
- gluster volume reset-brick $volname $host:/gluster/bricksdd1_new $host:/gluster/bricksdd1 commit force

>>> I think it is not possible. At least this is what we tested during and after development.
We would consider the above case a replace-brick, not a reset-brick.

thx
Hubert

On Mon, Jan 7, 2019 at 08:21 Ashish Pandey <aspandey@redhat.com> wrote:
>
> comments inline
>
> ________________________________
> From: "Hu Bert" <revirii@googlemail.com>
> To: "Ashish Pandey" <aspandey@redhat.com>
> Cc: "Gluster Users" <gluster-users@gluster.org>
> Sent: Monday, January 7, 2019 12:41:29 PM
> Subject: Re: [Gluster-users] Glusterfs 4.1.6
>
> Hi Ashish & all others,
>
> if I may jump in... I have a little question, if that's ok?
> Are replace-brick and reset-brick different commands for two distinct
> problems? I once had a faulty disk (= brick); it got replaced
> (hot-swap) and received the same identifier (/dev/sdd again). I
> followed this guide:
>
> https://docs.gluster.org/en/v3/Administrator%20Guide/Managing%20Volumes/
> -->> "Replacing bricks in Replicate/Distributed Replicate volumes"
>
> If I understand it correctly:
>
> - replace-brick is for "I have an additional disk and want to move
> data from the existing brick to the new brick": the old brick gets
> removed from the volume and the new brick gets added to the volume.
> - reset-brick is for "one of my hdds crashed and will be replaced by
> a new one": the brick name stays the same.
>
> Did I get that right? If so: holy smokes... then I misunderstood this
> completely (sorry @Pranith & Xavi). The wording is a bit strange here...
>
> >>>>>>>>>>>>>>>>>>>>>>>>>
> Yes, your understanding is correct. In addition to the above, one more use of reset-brick:
> if you want to change the hostname of your server and the bricks are addressed by hostname,
> you can use reset-brick to change them from hostname to IP address and then change the
> hostname of the server.
> In short, whenever you want to change something on one of the bricks while the location and
> mount point stay the same, you should use reset-brick.
> >>>>>>>>>>>>>>>>>>>>>>>>>
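
For illustration only: the two reset-brick uses just described, and the replace-brick alternative for the renamed mount point, might look roughly like this. The volume name "myvol", the host "server1" and the address "10.0.0.5" are placeholders, not taken from the thread; please check "gluster volume help" and the 3.9.0 release notes for the exact syntax on your version.

# Disk behind the brick was swapped, host and mount point unchanged: reset-brick
gluster volume reset-brick myvol server1:/gluster/bricksdd1 start
# replace the disk, recreate the filesystem, remount it at /gluster/bricksdd1, then:
gluster volume reset-brick myvol server1:/gluster/bricksdd1 server1:/gluster/bricksdd1 commit force

# Brick path unchanged, but the host part should switch from hostname to IP: also reset-brick
gluster volume reset-brick myvol server1:/gluster/bricksdd1 start
gluster volume reset-brick myvol server1:/gluster/bricksdd1 10.0.0.5:/gluster/bricksdd1 commit force

# Renamed mount point (/gluster/bricksdd1_new -> /gluster/bricksdd1): treated as replace-brick,
# so presumably something like:
gluster volume replace-brick myvol server1:/gluster/bricksdd1_new server1:/gluster/bricksdd1 commit force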
>
> Thx
> Hubert
>
> On Thu, Jan 3, 2019 at 12:38 Ashish Pandey <aspandey@redhat.com> wrote:
> >
> > Hi,
> >
> > Some of the steps you followed are not correct.
> > You should have used the reset-brick command, which was introduced for exactly the task you wanted to do:
> >
> > https://docs.gluster.org/en/v3/release-notes/3.9.0/
> >
> > Your thinking was correct, but replacing a faulty disk requires some additional tasks, which this command
> > will do automatically.
> >
> > Step 1 :- kill pid of the faulty brick in node >>>>>> This should be done with the "reset-brick start" command; follow the steps provided in the link.
> > Step 2 :- running volume status shows "N/A" under 'pid' & 'TCP port'
> > Step 3 :- replace the disk and mount the new disk at the same mount point where the old disk was mounted
> > Step 4 :- run command "gluster v start volname force" >>>>>>>>>>>> This should be done with the "reset-brick commit force" command, which will trigger the heal. Follow the link.
> > Step 5 :- running volume status still shows "N/A" under 'pid' & 'TCP port'
> >
> > ---
> > Ashish
> >
> > ________________________________
> > From: "Amudhan P" <amudhan83@gmail.com>
> > To: "Gluster Users" <gluster-users@gluster.org>
> > Sent: Thursday, January 3, 2019 4:25:58 PM
> > Subject: [Gluster-users] Glusterfs 4.1.6
> >
> > Hi,
> >
> > I am working with Glusterfs 4.1.6 on a test machine. I am trying to replace a faulty disk; below are the steps I followed, but I wasn't successful.
> >
> > 3 nodes, 2 disks per node, Disperse volume 4+2:
> > Step 1 :- kill pid of the faulty brick in node
> > Step 2 :- running volume status shows "N/A" under 'pid' & 'TCP port'
> > Step 3 :- replace the disk and mount the new disk at the same mount point where the old disk was mounted
> > Step 4 :- run command "gluster v start volname force"
> > Step 5 :- running volume status still shows "N/A" under 'pid' & 'TCP port'
> >
> > The expected behavior was that a new brick process and heal would have started.
> >
> > Following the same steps on 3.10.1 works perfectly: a new brick process starts and heal begins.
> > But the same steps are not working in 4.1.6. Did I miss any steps? What should be done?
> >
> > Amudhan

_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users
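
Pulling Ashish's corrections together, a minimal sketch of the reset-brick-based procedure for the scenario Amudhan describes could look like this; the volume name "testvol" and the brick "node1:/bricks/brick1" are placeholders, not taken from the thread.

gluster volume reset-brick testvol node1:/bricks/brick1 start
# volume status should now show "N/A" for the brick's pid and TCP port
# replace the failed disk, recreate partition and filesystem, mount it again at /bricks/brick1
gluster volume reset-brick testvol node1:/bricks/brick1 node1:/bricks/brick1 commit force
# a new brick process should start and heal should be triggered; verify with:
gluster volume status testvol
gluster volume heal testvol info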