[Bugs] [Bug 1322145] Glusterd fails to restart after replacing a failed GlusterFS node and a volume has a snapshot
bugzilla at redhat.com
Wed Mar 22 17:48:47 UTC 2017
https://bugzilla.redhat.com/show_bug.cgi?id=1322145
Ben Werthmann <ben at apcera.com> changed:
What                        |Removed                     |Added
----------------------------------------------------------------------------
Flags                       |                            |needinfo?(gyadav at redhat.com)
--- Comment #28 from Ben Werthmann <ben at apcera.com> ---
(In reply to Gaurav Yadav from comment #24)
>
> Could you please mention the other failure where you have concerns
Replace-brick operations are performed to recover from the following states (a sketch of the command follows the list):
1. Total system failure - server/instance is "terminated"
2. A server running Gluster enters an unrecoverable error state and must be
replaced to recover the cluster from a degraded state (case: replica 3
volumes).
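For reference, a minimal sketch of driving such a replacement from a script, not taken from this bug report; the volume name, hostnames, and brick paths are placeholders:

#!/usr/bin/env python3
"""Sketch: replace a failed brick with
`gluster volume replace-brick ... commit force`."""
import subprocess

def replace_brick(volume: str, old_brick: str, new_brick: str) -> None:
    # `commit force` is the only replace-brick mode supported by recent
    # glusterd releases; it swaps the brick in one step and relies on
    # self-heal to repopulate the new brick.
    cmd = [
        "gluster", "volume", "replace-brick",
        volume, old_brick, new_brick,
        "commit", "force",
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    replace_brick(
        "vol0",
        "failed-node:/bricks/brick1/vol0",
        "new-node:/bricks/brick1/vol0",
    )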
In case 2, the LVM thin pool (the thin data LV and its snapshot LVs) generally
enters a read-only state because the thin pool's metadata LV is exhausted and
fails to extend [1]. Gluster ignores "tpool_metadata is at low water mark"
events and continues to create snapshots.
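Since glusterd does not react to those low-water-mark events, a monitoring sketch such as the following can watch thin-pool metadata usage via `lvs --reportformat json` and warn before the pool goes read-only. This is not part of Gluster; the 80% threshold is an arbitrary example value.

#!/usr/bin/env python3
"""Sketch: warn when a thin pool's metadata LV (tmeta) is close to full."""
import json
import subprocess

THRESHOLD = 80.0  # example threshold: percent of tmeta used before warning

def thin_pool_metadata_usage():
    # `lvs --reportformat json` emits one "lv" record per logical volume;
    # metadata_percent is empty for LVs that are not thin pools.
    out = subprocess.run(
        ["lvs", "--reportformat", "json",
         "-o", "vg_name,lv_name,metadata_percent"],
        check=True, capture_output=True, text=True,
    ).stdout
    for lv in json.loads(out)["report"][0]["lv"]:
        pct = lv["metadata_percent"]
        if pct:
            yield f"{lv['vg_name']}/{lv['lv_name']}", float(pct)

if __name__ == "__main__":
    for pool, pct in thin_pool_metadata_usage():
        if pct >= THRESHOLD:
            # Extend before exhaustion, e.g.: lvextend --poolmetadatasize +1G <pool>
            print(f"WARNING: {pool} metadata at {pct:.1f}% - extend tmeta "
                  f"before it is exhausted and the pool goes read-only")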
[1] I discussed this issue with Zdenek Kabelac. The issue is due to older
kernel dm-thin support and/or older versions of the userspace LVM tools.