[Bugs] [Bug 1316577] New: Why do we have to replace a failed brick with a brick mounted on a different mount point?

bugzilla at redhat.com bugzilla at redhat.com
Thu Mar 10 14:11:26 UTC 2016


https://bugzilla.redhat.com/show_bug.cgi?id=1316577

            Bug ID: 1316577
           Summary: Why do we have to replace a failed brick with a brick
                    mounted on a different mount point?
           Product: GlusterFS
           Version: mainline
         Component: glusterd
          Severity: medium
          Assignee: bugs at gluster.org
          Reporter: pportant at redhat.com
                CC: bugs at gluster.org



We had a failed brick due to a bad disk.  We replaced the drive and are
following the documentation (see "Replacing brick in Replicate/Distributed
Replicate volumes" in
http://gluster.readthedocs.org/en/latest/Administrator%20Guide/Managing%20Volumes/#replace-brick).
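
For reference, the documented procedure essentially points the volume at a
brick directory on a new path and lets replication heal into it.  A rough
sketch of the commands involved is below; the volume name, host name, and
brick paths are made up for illustration:

    # new, empty directory on the replacement disk, mounted on a *different* path
    mkdir -p /bricks/brick1_new
    gluster volume replace-brick myvol server2:/bricks/brick1 \
        server2:/bricks/brick1_new commit force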

However, this procedure seems odd.

If I have a six-node cluster, each node with one brick, and I use three-way
replication, I end up with data distributed across two replica sets, each of
which is replicated three ways.
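
Concretely, that layout is what you get from a create command along these
lines (host names and brick paths are again hypothetical):

    gluster volume create myvol replica 3 \
        server1:/bricks/brick1 server2:/bricks/brick1 server3:/bricks/brick1 \
        server4:/bricks/brick1 server5:/bricks/brick1 server6:/bricks/brick1
    # six bricks at replica 3 = two distribute subvolumes, each replicated 3 ways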

In this scenario, if a node goes down for a time and comes back, its brick will
get self-healed to match the other replicas and life goes on.
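
When the node rejoins, the pending heal can be watched and, if needed, kicked
off with the usual commands (volume name made up as above):

    gluster volume heal myvol info    # show entries still pending heal, per brick
    gluster volume heal myvol         # trigger an index heal if it hasn't started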

Why can't I do the same with that one failed brick?  Just take it out of
service, replace the disk, remount it at the same mount point, and allow it to
self-heal?
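
In other words, the workflow I would expect to be possible looks roughly like
this; it is only a sketch of what I am asking about, not a recipe claimed to
work, and the device, volume, and path names are invented:

    # brick process is already down because the disk died
    mkfs.xfs /dev/sdX                  # format the replacement drive
    mount /dev/sdX /bricks/brick1      # remount on the *same* mount point as before
    gluster volume start myvol force   # restart the down brick process
    gluster volume heal myvol full     # let self-heal repopulate the empty brick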

-- 
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

