[Gluster-users] Gluster snapshot fails

Strahil Nikolov hunter86_bg at yahoo.com
Thu Apr 11 08:00:31 UTC 2019


 Hi Rafi,
thanks for your update.
I have tested again with another gluster volume.

[root at ovirt1 glusterfs]# gluster volume info isos

Volume Name: isos
Type: Replicate
Volume ID: 9b92b5bd-79f5-427b-bd8d-af28b038ed2a
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1:/gluster_bricks/isos/isos
Brick2: ovirt2:/gluster_bricks/isos/isos
Brick3: ovirt3.localdomain:/gluster_bricks/isos/isos (arbiter)
Options Reconfigured:
cluster.granular-entry-heal: enable
performance.strict-o-direct: on
network.ping-timeout: 30
storage.owner-gid: 36
storage.owner-uid: 36
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.enable-shared-storage: enable

Command run:
logrotate -f glusterfs; logrotate -f glusterfs-georep; gluster snapshot create isos-snap-2019-04-11 isos description TEST

Logs:

[root at ovirt1 glusterfs]# cat cli.log
[2019-04-11 07:51:02.367453] I [cli.c:769:main] 0-cli: Started running gluster with version 5.5
[2019-04-11 07:51:02.486863] I [MSGID: 101190] [event-epoll.c:621:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2019-04-11 07:51:02.556813] E [cli-rpc-ops.c:11293:gf_cli_snapshot] 0-cli: cli_to_glusterd for snapshot failed
[2019-04-11 07:51:02.556880] I [input.c:31:cli_batch] 0-: Exiting with: -1
[root at ovirt1 glusterfs]# cat glusterd.log
[2019-04-11 07:51:02.553357] E [MSGID: 106024] [glusterd-snapshot.c:2547:glusterd_snapshot_create_prevalidate] 0-management: Snapshot is supported only for thin provisioned LV. Ensure that all bricks of isos are thinly provisioned LV.
[2019-04-11 07:51:02.553365] W [MSGID: 106029] [glusterd-snapshot.c:8613:glusterd_snapshot_prevalidate] 0-management: Snapshot create pre-validation failed
[2019-04-11 07:51:02.553703] W [MSGID: 106121] [glusterd-mgmt.c:147:gd_mgmt_v3_pre_validate_fn] 0-management: Snapshot Prevalidate Failed
[2019-04-11 07:51:02.553719] E [MSGID: 106121] [glusterd-mgmt.c:1015:glusterd_mgmt_v3_pre_validate] 0-management: Pre Validation failed for operation Snapshot on local node
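The prevalidate error above means glusterd believes at least one brick is not backed by a thin LV. A quick way to check what glusterd sees is to resolve each brick mount to its device and look at the `pool_lv` field; this is a sketch using the brick mount points from the `volume info` output above:

```shell
# Resolve each brick mount to its backing device and show the thin pool.
# A thin LV reports a non-empty pool_lv; an empty pool_lv means a thick LV,
# which is what triggers the "supported only for thin provisioned LV" error.
for mnt in /gluster_bricks/isos /gluster_bricks/data; do
  dev=$(df --output=source "$mnt" | tail -n 1)
  pool=$(lvs --noheadings -o pool_lv "$dev" | tr -d ' ')
  if [ -n "$pool" ]; then
    echo "$mnt ($dev): thin, pool=$pool"
  else
    echo "$mnt ($dev): NOT thin"
  fi
done
```

Note that the check has to pass on every node; a single thick-provisioned brick (or arbiter) fails the whole snapshot.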

My LVs hosting the bricks are:

[root at ovirt1 ~]# lvs gluster_vg_md0
  LV              VG             Attr       LSize   Pool            Origin Data%  Meta%  Move Log Cpy%Sync Convert
  gluster_lv_data gluster_vg_md0 Vwi-aot--- 500.00g my_vdo_thinpool        35.97
  gluster_lv_isos gluster_vg_md0 Vwi-aot---  50.00g my_vdo_thinpool        52.11
  my_vdo_thinpool gluster_vg_md0 twi-aot---   9.86t                        2.04   11.45

[root at ovirt1 ~]# ssh ovirt2 "lvs gluster_vg_md0"
  LV              VG             Attr       LSize   Pool            Origin Data%  Meta%  Move Log Cpy%Sync Convert
  gluster_lv_data gluster_vg_md0 Vwi-aot--- 500.00g my_vdo_thinpool        35.98
  gluster_lv_isos gluster_vg_md0 Vwi-aot---  50.00g my_vdo_thinpool        25.94
  my_vdo_thinpool gluster_vg_md0 twi-aot---  <9.77t                        1.93   11.39
[root at ovirt1 ~]# ssh ovirt3 "lvs gluster_vg_sda3"
  LV                    VG              Attr       LSize  Pool                  Origin Data%  Meta%  Move Log Cpy%Sync Convert
  gluster_lv_data       gluster_vg_sda3 Vwi-aotz-- 15.00g gluster_thinpool_sda3        0.17
  gluster_lv_engine     gluster_vg_sda3 Vwi-aotz-- 15.00g gluster_thinpool_sda3        0.16
  gluster_lv_isos       gluster_vg_sda3 Vwi-aotz-- 15.00g gluster_thinpool_sda3        0.12
  gluster_thinpool_sda3 gluster_vg_sda3 twi-aotz-- 41.00g                              0.16   1.58

As you can see, all bricks are thin LVs and space is not the issue.
Can someone hint me at how to enable debug logging, so the gluster logs can show the reason for that pre-check failure?
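One way to get that debug output is to raise glusterd's own log level; this is a sketch assuming an EL7-style install where glusterd reads its options from /etc/sysconfig/glusterd (adjust the path for your distro):

```shell
# Raise glusterd's log level to DEBUG, then restart the daemon.
sed -i 's/^#\?LOG_LEVEL=.*/LOG_LEVEL=DEBUG/' /etc/sysconfig/glusterd
systemctl restart glusterd

# Re-run the failing command and inspect the now-verbose log:
gluster snapshot create isos-snap-2019-04-11 isos description TEST
grep -i 'snapshot' /var/log/glusterfs/glusterd.log | tail -n 20
```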
Best Regards,
Strahil Nikolov


    On Wednesday, April 10, 2019, 9:05:15 AM GMT-4, Rafi Kavungal Chundattu Parambil <rkavunga at redhat.com> wrote:
 
 Hi Strahil,

The name of the device is not a problem here at all. Can you please check the glusterd log and see if there is any useful information about the failure? Also, please provide the output of `lvscan` and `lvs --noheadings -o pool_lv` from all nodes.
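A loop like the following can collect that output from every node in one pass (node names taken from this thread; it assumes passwordless ssh between the nodes):

```shell
# Gather LV and thin-pool information from each gluster node.
for node in ovirt1 ovirt2 ovirt3.localdomain; do
  echo "== $node =="
  ssh "$node" "lvscan; lvs --noheadings -o lv_name,pool_lv"
done
```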

Regards
Rafi KC

----- Original Message -----
From: "Strahil Nikolov" <hunter86_bg at yahoo.com>
To: gluster-users at gluster.org
Sent: Wednesday, April 10, 2019 2:36:39 AM
Subject: [Gluster-users] Gluster snapshot fails

Hello Community, 

I have a problem running a snapshot of a replica 3 arbiter 1 volume. 

Error: 
[root at ovirt2 ~]# gluster snapshot create before-423 engine description "Before upgrade of engine from 4.2.2 to 4.2.3" 
snapshot create: failed: Snapshot is supported only for thin provisioned LV. Ensure that all bricks of engine are thinly provisioned LV. 
Snapshot command failed 

Volume info: 

Volume Name: engine 
Type: Replicate 
Volume ID: 30ca1cc2-f2f7-4749-9e2e-cee9d7099ded 
Status: Started 
Snapshot Count: 0 
Number of Bricks: 1 x (2 + 1) = 3 
Transport-type: tcp 
Bricks: 
Brick1: ovirt1:/gluster_bricks/engine/engine 
Brick2: ovirt2:/gluster_bricks/engine/engine 
Brick3: ovirt3:/gluster_bricks/engine/engine (arbiter) 
Options Reconfigured: 
cluster.granular-entry-heal: enable 
performance.strict-o-direct: on 
network.ping-timeout: 30 
storage.owner-gid: 36 
storage.owner-uid: 36 
user.cifs: off 
features.shard: on 
cluster.shd-wait-qlength: 10000 
cluster.shd-max-threads: 8 
cluster.locking-scheme: granular 
cluster.data-self-heal-algorithm: full 
cluster.server-quorum-type: server 
cluster.quorum-type: auto 
cluster.eager-lock: enable 
network.remote-dio: off 
performance.low-prio-threads: 32 
performance.io-cache: off 
performance.read-ahead: off 
performance.quick-read: off 
transport.address-family: inet 
nfs.disable: on 
performance.client-io-threads: off 
cluster.enable-shared-storage: enable 


All bricks are on thin LVM with plenty of space. The only thing that could be causing it is that on ovirt1 & ovirt2 the brick is on /dev/gluster_vg_ssd/gluster_lv_engine, while the arbiter's is on /dev/gluster_vg_sda3/gluster_lv_engine.

Is that the issue? Should I rename my brick's VG?
If so, why is there no mention of it in the documentation?


Best Regards, 
Strahil Nikolov 


_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users
  