[Gluster-users] [Gluster-devel] Replica 3 - how to replace failed node (peer)

RAFI KC rkavunga at redhat.com
Wed Apr 10 10:16:25 UTC 2019

Reset-brick is another way of replacing a brick. It is usually helpful 
when you want to replace a brick with one of the same name. You can find the 
documentation here 

In your case, I think you can use reset-brick. So you can initiate a 
reset-brick start, then replace your failed disk and create a 
new brick with the same name. Once you have a healthy disk and brick, you can 
commit the reset-brick.
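A rough sketch of that sequence follows. Note the volume name (assumed here to be gv0imagestore, taken from the brick path in Martin's output below) and the RAID device name (/dev/md0) are guesses; substitute your actual values:

```shell
# Take the dead brick out of service in the volume configuration
# (brick path is from the original post; volume name is an assumption)
gluster volume reset-brick gv0imagestore \
    node2.san:/tank/gluster/gv0imagestore/brick1 start

# Replace the failed disks, rebuild the RAID array, then recreate the
# filesystem and an empty brick directory at the same path
mkfs.xfs /dev/md0    # hypothetical device name; use your actual array
mount /dev/md0 /tank
mkdir -p /tank/gluster/gv0imagestore/brick1

# Re-add the brick under the same name; self-heal then repopulates the
# empty brick from the two surviving replicas
gluster volume reset-brick gv0imagestore \
    node2.san:/tank/gluster/gv0imagestore/brick1 \
    node2.san:/tank/gluster/gv0imagestore/brick1 commit force
```

After the commit, you can watch `gluster volume heal gv0imagestore info` to follow the resync onto the new brick.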

Let us know if you have any questions,

Rafi KC

On 4/10/19 3:39 PM, David Spisla wrote:
> Hello Martin,
> look here:
> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.4/pdf/administration_guide/Red_Hat_Gluster_Storage-3.4-Administration_Guide-en-US.pdf
> on page 324. There is a manual how to replace a brick in case of a 
> hardware failure
> Regards
> David Spisla
> On Wed, Apr 10, 2019 at 11:42 AM Martin Toth 
> <snowmailer at gmail.com <mailto:snowmailer at gmail.com>> wrote:
>     Hi all,
>     I am running replica 3 gluster with 3 bricks. One of my servers
>     failed - all disks are showing errors and raid is in fault state.
>     Type: Replicate
>     Volume ID: 41d5c283-3a74-4af8-a55d-924447bfa59a
>     Status: Started
>     Number of Bricks: 1 x 3 = 3
>     Transport-type: tcp
>     Bricks:
>     Brick1: node1.san:/tank/gluster/gv0imagestore/brick1
>     Brick2: node2.san:/tank/gluster/gv0imagestore/brick1 <— this brick
>     is down
>     Brick3: node3.san:/tank/gluster/gv0imagestore/brick1
>     So one of my bricks has totally failed (node2). It went down and
>     all its data is lost (failed RAID on node2). Now I am running only
>     two bricks on 2 servers out of 3.
>     This is a really critical problem for us; we could lose all data. I
>     want to add new disks to node2, create a new RAID array on them and
>     try to replace the failed brick on this node.
>     What is the procedure for replacing Brick2 on node2, can someone
>     advise? I can’t find anything relevant in the documentation.
>     Thanks in advance,
>     Martin
>     _______________________________________________
>     Gluster-users mailing list
>     Gluster-users at gluster.org <mailto:Gluster-users at gluster.org>
>     https://lists.gluster.org/mailman/listinfo/gluster-users
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
