<div dir="ltr">Hi Strahil,<div><br></div><div>Thank you for sharing your experience with reset-brick option.</div><div>Since he is using the gluster version 3.7.6, we do not have the reset-brick [1] option implemented there. It is introduced in 3.9.0. He has to go with replace-brick with the force option if he wants to use the same path & name for the new brick. </div><div>Yes, it is recommended to have the new brick to be of the same size as that of the other bricks.</div><div><br></div><div>[1] <a href="https://docs.gluster.org/en/latest/release-notes/3.9.0/#introducing-reset-brick-command">https://docs.gluster.org/en/latest/release-notes/3.9.0/#introducing-reset-brick-command</a></div><div><br></div><div>Regards,</div><div>Karthik</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Apr 10, 2019 at 10:31 PM Strahil <<a href="mailto:hunter86_bg@yahoo.com">hunter86_bg@yahoo.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">I have used reset-brick - but I have just changed the brick layout.<br>
You may give it a try, but I guess you need your new brick to have the same amount of space (or more).<br>
<br>
Maybe someone more experienced should share a more sound solution.<br>
<br>
Best Regards,<br>
Strahil Nikolov<br><br>On Apr 10, 2019 12:42, Martin Toth <<a href="mailto:snowmailer@gmail.com" target="_blank">snowmailer@gmail.com</a>> wrote:<br>
><br>
> Hi all,<br>
><br>
> I am running a replica 3 gluster volume with 3 bricks. One of my servers failed - all disks are showing errors and the RAID is in a fault state.<br>
><br>
> Type: Replicate<br>
> Volume ID: 41d5c283-3a74-4af8-a55d-924447bfa59a<br>
> Status: Started<br>
> Number of Bricks: 1 x 3 = 3<br>
> Transport-type: tcp<br>
> Bricks:<br>
> Brick1: node1.san:/tank/gluster/gv0imagestore/brick1<br>
> Brick2: node2.san:/tank/gluster/gv0imagestore/brick1 <— this brick is down<br>
> Brick3: node3.san:/tank/gluster/gv0imagestore/brick1<br>
><br>
> So one of my bricks has totally failed (node2). It went down and all data on it is lost (failed RAID on node2). Now I am running only two bricks on 2 servers out of 3.<br>
> This is a really critical problem for us, as we could lose all data. I want to add new disks to node2, create a new RAID array on them and try to replace the failed brick on this node.<br>
><br>
> What is the procedure for replacing Brick2 on node2, can someone advise? I can’t find anything relevant in the documentation.<br>
><br>
> Thanks in advance,<br>
> Martin<br>
</blockquote></div>