[Bugs] [Bug 1322145] Glusterd fails to restart after replacing a failed GlusterFS node and a volume has a snapshot
bugzilla at redhat.com
Wed Mar 30 19:31:56 UTC 2016
https://bugzilla.redhat.com/show_bug.cgi?id=1322145
--- Comment #2 from Ben Werthmann <ben at apcera.com> ---
In this case, both. We're testing recovery from a complete server failure where
the storage (brick) and compute (peer) have failed. We first run 'gluster peer
probe $new_peer_ip'. Later we remove the dead peer via 'gluster peer detach
$failed_peer force', then run 'gluster volume replace-brick $vol $failed_peer
$new_peer_ip:$new_brick commit force'. The 'gluster volume replace-brick'
operation exited with a non-zero exit status.
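For reference, the full sequence looks roughly like the sketch below. The host
addresses, volume name, and brick paths are placeholder values, not our real
configuration; the source brick is spelled out as host:path since that is the
form replace-brick expects.

    # Placeholders for the replacement and failed nodes (hypothetical values)
    NEW_PEER=10.0.0.12
    FAILED_PEER=10.0.0.11
    VOL=testvol
    OLD_BRICK=/bricks/b1        # brick path on the failed server
    NEW_BRICK=/bricks/b1        # brick path on the replacement server

    # 1. Introduce the replacement node to the trusted pool
    gluster peer probe "$NEW_PEER"

    # 2. Drop the dead node from the pool
    gluster peer detach "$FAILED_PEER" force

    # 3. Move the failed brick onto the new node; this is the step that
    #    returned a non-zero exit status in our testing
    gluster volume replace-brick "$VOL" \
        "$FAILED_PEER:$OLD_BRICK" "$NEW_PEER:$NEW_BRICK" commit force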
I'll build a test environment to gather the complete glusterd log file along
with cmd_history.log.
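On a default install those files live under /var/log/glusterfs/ on each node
(the glusterd log is typically etc-glusterfs-glusterd.vol.log on older releases
and glusterd.log on newer ones). Something along these lines should capture
them:

    # Bundle the glusterd log and command history from one node
    # (paths assume a stock install; the glusterd log name varies by release)
    tar czf gluster-logs-$(hostname).tar.gz \
        /var/log/glusterfs/cmd_history.log \
        /var/log/glusterfs/*glusterd*.log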