[Bugs] [Bug 1232430] [SNAPSHOT] : Snapshot delete fails with error - Snap might not be in an usable state

bugzilla at redhat.com bugzilla at redhat.com
Fri Jul 17 07:48:02 UTC 2015


https://bugzilla.redhat.com/show_bug.cgi?id=1232430

Richard Neuboeck <hawk at tbi.univie.ac.at> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |hawk at tbi.univie.ac.at



--- Comment #3 from Richard Neuboeck <hawk at tbi.univie.ac.at> ---
I can confirm that the 'Snap might not be in an usable state' problem exists in glusterfs 3.7.2.

Version: glusterfs-3.7.2-3.el7.x86_64 (gluster repo)
OS: CentOS 7.1 64bit

Steps to reproduce:

# gluster snapshot create snap1 plexus description 'test snapshot'
snapshot create: success: Snap snap1_GMT-2015.07.16-11.16.03 created successfully

# gluster snapshot list
snap1_GMT-2015.07.16-11.16.03

# gluster snapshot info
Snapshot                  : snap1_GMT-2015.07.16-11.16.03
Snap UUID                 : 6ddce064-2bd0-4770-9995-583147e1a35c
Description               : test snapshot
Created                   : 2015-07-16 11:16:03
Snap Volumes:

    Snap Volume Name          : e905ba76967f43efa0220c2283c87057
    Origin Volume name        : plexus
    Snaps taken for plexus      : 1
    Snaps available for plexus  : 255
    Status                    : Stopped
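As a side note for reference: the snapshot is reported as Stopped right after creation, which as far as I know is the default (snapshots are not auto-activated). A sketch of how the per-brick state of the snapshot can be checked and the snapshot started; I have not verified whether activating it changes the delete behaviour:

# gluster snapshot status snap1_GMT-2015.07.16-11.16.03
  (reports the snapshot brick path, volume group, LV size and whether the brick is running on each node)
# gluster snapshot activate snap1_GMT-2015.07.16-11.16.03
  (starts the snapshot volume, which is created in the Stopped state by default)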


# gluster snapshot delete snap1_GMT-2015.07.16-11.16.03
Deleting snap will erase all the information about the snap. Do you still want to continue? (y/n) y
snapshot delete: failed: Snapshot snap1_GMT-2015.07.16-11.16.03 might not be in an usable state.
Snapshot command failed

# gluster snapshot delete all
System contains 1 snapshot(s).
Do you still want to continue and delete them?  (y/n) y
snapshot delete: failed: Snapshot snap1_GMT-2015.07.16-11.16.03 might not be in an usable state.
Snapshot command failed
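
After the failed delete I would expect the thin snapshot LV to still be present on both nodes. A sketch of how I would check the state on each node (the glusterd log file name below is the default on these installs, so treat it as an assumption):

# lvs storage_vg
  (the snapshot LV e905ba76967f43efa0220c2283c87057_0 should still be listed)
# grep -i snap /var/log/glusterfs/etc-glusterfs-glusterd.vol.log | tail -n 20
  (shows the most recent snapshot-related glusterd messages)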


Setup on the machines I've tested this on:

- CentOS 7.1 minimal installation
- Storage for the brick is thinly provisioned with LVM as follows (a rough sketch of how such a layout can be created follows after the volume info below):
# lvs --all
  LV                                 VG         Attr       LSize  Pool     Origin   Data%  Meta%  Move Log Cpy%Sync Convert
  e905ba76967f43efa0220c2283c87057_0 storage_vg Vwi-aotz-- 45.00t thinpool thindata 0.04
  [lvol0_pmspare]                    storage_vg ewi------- 10.00g
  thindata                           storage_vg Vwi-aotz-- 45.00t thinpool          0.05
  thinpool                           storage_vg twi-aotz-- 49.95t                   0.05   0.43
  [thinpool_tdata]                   storage_vg Twi-ao---- 49.95t
  [thinpool_tmeta]                   storage_vg ewi-ao---- 10.00g

- The Gluster setup for this test consists of two machines with one brick each. Each brick sits on a (hardware) RAID 5 volume. Since I got a lot of NFS-related error messages and I'm not using NFS in this case, 'nfs.disable' is set to on.

# gluster volume info

Volume Name: plexus
Type: Replicate
Volume ID: 105559c1-c6d9-4557-8488-2197ad86d92d
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: sphere-one:/srv/gluster/brick
Brick2: sphere-two:/srv/gluster/brick
Options Reconfigured:
features.barrier: disable
nfs.disable: on
performance.readdir-ahead: on
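
For reference, since volume snapshots require the bricks to sit on thinly provisioned LVs, a layout like the one in the lvs output above would roughly be created as follows. This is only a sketch: the device name /dev/sdb, the metadata size and the mkfs/mount details are assumptions; only the VG/LV names and sizes are taken from the output above.

# pvcreate /dev/sdb
# vgcreate storage_vg /dev/sdb
# lvcreate -L 49.95T --poolmetadatasize 10G --thinpool thinpool storage_vg
# lvcreate -V 45T --thin -n thindata storage_vg/thinpool
# mkfs.xfs -i size=512 /dev/storage_vg/thindata
# mount /dev/storage_vg/thindata /srv/gluster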

Attached are the logs from /var/log/glusterfs on both nodes.


