[Gluster-users] snapshot removal failed on one node how to recover (3.7.11)
Alastair Neil
ajneil.tech at gmail.com
Tue Jun 7 00:02:43 UTC 2016
Does no one have any suggestions? Would the scenario I have been toying
with work: remove the brick from the node with the out-of-sync snapshots,
destroy all the associated logical volumes, and then add the brick back as
an arbiter?
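In gluster CLI terms, the plan would be roughly the following (the volume
name and brick path are placeholders for my setup, and I am not certain
3.7.11 will accept adding an arbiter brick to an existing volume):

    # shrink the volume to replica 2, dropping gluster0's brick
    gluster volume remove-brick myvol replica 2 \
        gluster0.vsnet.gmu.edu:/export/brick/myvol force

    # on gluster0: destroy the brick LV and any orphaned snapshot LVs
    lvs                            # identify the brick and snapshot LVs
    lvremove /dev/<vg>/<brick_lv>  # repeat for each leftover snapshot LV

    # grow back to replica 3, with gluster0's brick as the arbiter
    gluster volume add-brick myvol replica 3 arbiter 1 \
        gluster0.vsnet.gmu.edu:/export/brick/myvol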
On 1 June 2016 at 13:40, Alastair Neil <ajneil.tech at gmail.com> wrote:
> I have a replica 3 volume with snapshots scheduled using
> snap_scheduler.py.
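> For reference, the schedule was created along these lines (the job name,
> cron timing, and volume name here are illustrative, not my exact values):
>
>     snap_scheduler.py init
>     snap_scheduler.py add "daily-snap" "0 0 * * *" "myvol"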
>
> I recently tried to remove a snapshot and the command failed on one node:
>
> snapshot delete: failed: Commit failed on gluster0.vsnet.gmu.edu. Please
>> check log file for details.
>> Snapshot command failed
>
>
> How do I recover from this failure? Clearly I need to remove the snapshot
> from the offending server, but this does not seem possible through the
> gluster CLI, as the snapshot no longer exists on the other two nodes.
> Suggestions welcome.
>
> -Alastair
>
>
>
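The only way I can see to remove the snapshot from gluster0 by hand is
something like the following (the snapshot, VG, and LV names are
placeholders; I am assuming the snapshot's thin LV and its entry under
/var/lib/glusterd/snaps are still present on that node):

    # on gluster0 only
    systemctl stop glusterd        # or: service glusterd stop

    # unmount the snapshot brick (mounted under /var/run/gluster/snaps/,
    # with a UUID-like snap volume name)
    umount /var/run/gluster/snaps/<snap_vol>/<brick>

    # remove the thin snapshot LV (find the exact name with lvs)
    lvremove /dev/<vg>/<snap_lv>

    # remove glusterd's record of the snapshot, then restart
    rm -rf /var/lib/glusterd/snaps/<snapname>
    systemctl start glusterd

Is that sane, or am I likely to leave glusterd on gluster0 in an
inconsistent state?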