[Bugs] [Bug 1572534] New: Unable to replace faulty brick

bugzilla at redhat.com
Fri Apr 27 09:12:18 UTC 2018


https://bugzilla.redhat.com/show_bug.cgi?id=1572534

            Bug ID: 1572534
           Summary: Unable to replace faulty brick
           Product: GlusterFS
           Version: 4.0
         Component: glusterd
          Severity: high
          Assignee: bugs at gluster.org
          Reporter: marcin.wyrembak at gmail.com
                CC: bugs at gluster.org



Description of problem:

I have a cluster replica 3.
I terminated one node, built a new one and re-added it to the pool with:
gluster peer probe <new_node>

I started replacing the failed bricks with the following command (at the time, the 2 healthy
nodes were quite busy, as I had an rsync running):
gluster volume replace-brick vol_name <old_node>:/shares/red/brick
<new_node>:/shares/red/brick commit force

Replacement of one of the bricks failed, but when I checked with:
gluster vol info

the brick looks as if it has been replaced:
Bricks:
Brick1: <healthy_node1>:/shares/red/brick
Brick2: <new_node2>:/shares/red/brick
Brick3: <healthy_node3>:/shares/red/brick

I attempted to detach terminated node with:
gluster peer detach <faulty_node>

peer detach: failed: One of the peers is probably down. Check with 'peer
status'
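
For reference, this is roughly how the cluster state can be inspected at this point with the standard GlusterFS CLI (vol_name and the node names are placeholders, as above):

```shell
# List the peers glusterd currently knows about and their connection state
gluster peer status

# Show the per-brick process status of the affected volume
gluster volume status vol_name

# Show whether self-heal still has pending entries after the replace-brick
gluster volume heal vol_name info
```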


Steps to Reproduce:
This may be hard to reproduce, as I have replaced nodes this way before without any
issues. It seems that the load on the box prevented the brick replacement from
completing, and the cluster is now in a half-broken state.

Any way to work around it?
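
One thing I have not tried yet is forcing the detach; if I understand the CLI correctly, peer detach accepts a force option (a sketch only, not verified against this half-replaced state):

```shell
# Force removal of the dead peer even though it is unreachable
gluster peer detach <faulty_node> force

# Then verify the pool membership and retry the brick replacement if needed
gluster pool list
```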

Thanks

-- 
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
