[Bugs] [Bug 1322145] Glusterd fails to restart after replacing a failed GlusterFS node and a volume has a snapshot
bugzilla at redhat.com
Mon Jul 25 06:19:57 UTC 2016
https://bugzilla.redhat.com/show_bug.cgi?id=1322145
Atin Mukherjee <amukherj at redhat.com> changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |rjoseph at redhat.com
Flags| |needinfo?(rjoseph at redhat.com)
--- Comment #11 from Atin Mukherjee <amukherj at redhat.com> ---
OK, so this is happening as per the current functionality. When a replace brick
is issued, the operation is restricted to that same volume; no other references
are updated. Since a snapshot works just like a volume, if the snapshot refers
to a failed peer that has already been replaced, glusterd will fail to restore
the snap. I don't think we have any option other than changing the IP in the
snap's volfile (rename) and making that work.
Rajesh,
What's your thought here?
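As a rough sketch of the workaround Atin describes (rewriting the failed
peer's reference in the snapshot's stored volfiles), one could substitute the
old IP for the new one across the snap's configuration files. The IPs, the
volfile name, and the directory below are illustrative assumptions for this
example, not details from the bug report; on a real node the snapshot volfiles
live under /var/lib/glusterd/snaps/:

```shell
# Hypothetical sketch: point snapshot volfiles at the replacement peer.
# OLD_IP, NEW_IP, and the snap directory are assumptions for illustration.
OLD_IP="192.168.1.10"
NEW_IP="192.168.1.20"
SNAP_DIR="$(mktemp -d)"   # stand-in for /var/lib/glusterd/snaps/<snapname>

# Create a fake volfile referencing the failed peer, to demonstrate the edit.
cat > "$SNAP_DIR/snap1.vol" <<EOF
option remote-host $OLD_IP
EOF

# Rewrite every volfile under the snap directory to reference the new peer.
find "$SNAP_DIR" -type f -name '*.vol' \
    -exec sed -i "s/$OLD_IP/$NEW_IP/g" {} +

# The volfile now names the replacement peer's IP.
grep "remote-host" "$SNAP_DIR/snap1.vol"
```

On a production node one would stop glusterd before editing and restart it
afterwards so the corrected volfiles are picked up at snap restore time.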