[Bugs] [Bug 1322145] Glusterd fails to restart after replacing a failed GlusterFS node and a volume has a snapshot

bugzilla at redhat.com bugzilla at redhat.com
Tue Mar 14 05:48:55 UTC 2017


https://bugzilla.redhat.com/show_bug.cgi?id=1322145

Avra Sengupta <asengupt at redhat.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
              Flags|needinfo?(asengupt at redhat.com)|



--- Comment #18 from Avra Sengupta <asengupt at redhat.com> ---
Well, that is the most likely scenario. If we disallow it, we force the user
either to stay with that peer for good, or to delete all those snapshots
(which would be unusable anyway once the peer is detached).

As you suggested, we can tell the user that the peer is hosting snapshot
bricks and therefore cannot be detached. It is similar to how we do not allow
deletion of a volume if it still has snapshots. A sketch of that check
follows below.
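
For illustration only, a minimal Python sketch of the validation being
proposed. The Cluster/Snapshot types and the peer_detach helper here are
hypothetical names for this example, not glusterd's actual implementation
(which lives in C inside the glusterd daemon); the point is just the
pre-check that refuses the detach while snapshot bricks live on the peer,
mirroring the existing volume-delete check.

    from dataclasses import dataclass, field
    from typing import List


    @dataclass
    class Snapshot:
        name: str
        brick_hosts: List[str]   # peers hosting this snapshot's bricks


    @dataclass
    class Cluster:
        snapshots: List[Snapshot] = field(default_factory=list)

        def snapshots_on_peer(self, peer: str) -> List[str]:
            """Names of snapshots with at least one brick on `peer`."""
            return [s.name for s in self.snapshots if peer in s.brick_hosts]

        def peer_detach(self, peer: str) -> None:
            """Refuse the detach while the peer hosts snapshot bricks,
            the same way volume delete is refused while snapshots exist."""
            blocking = self.snapshots_on_peer(peer)
            if blocking:
                raise RuntimeError(
                    f"peer detach failed: {peer} hosts bricks of "
                    f"snapshot(s) {', '.join(blocking)}; delete them first"
                )
            print(f"{peer} detached")


    if __name__ == "__main__":
        cluster = Cluster(snapshots=[Snapshot("snap1", ["node2"])])
        try:
            cluster.peer_detach("node2")   # refused: snap1 has a brick here
        except RuntimeError as err:
            print(err)
        cluster.snapshots = []             # after deleting the snapshots...
        cluster.peer_detach("node2")       # ...the detach goes through

Failing early with an error that names the blocking snapshots at least gives
the user an actionable message instead of a detached peer with unusable
snapshots.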

This solution is still not a holistic one, as it forces the user to delete
all of their snapshots, but it is the best one we have so far.

-- 
You are receiving this mail because:
You are on the CC list for the bug.

