[Gluster-users] Gluster snapshot fails

Strahil Nikolov hunter86_bg at yahoo.com
Fri Apr 12 11:44:43 UTC 2019


I hope this is the last update on the issue: I have opened a bug at https://bugzilla.redhat.com/show_bug.cgi?id=1699309

Best regards,
Strahil Nikolov

    On Friday, April 12, 2019, 07:32:41 GMT-4, Strahil Nikolov <hunter86_bg at yahoo.com> wrote:
 
  Hi All,
I have tested gluster snapshot without the systemd .automount units and it works as follows:

[root at ovirt1 system]# gluster snapshot create isos-snap-2019-04-11 isos  description TEST
snapshot create: success: Snap isos-snap-2019-04-11_GMT-2019.04.12-11.18.24 created successfully

[root at ovirt1 system]# gluster snapshot list
isos-snap-2019-04-11_GMT-2019.04.12-11.18.24
[root at ovirt1 system]# gluster snapshot info isos-snap-2019-04-11_GMT-2019.04.12-11.18.24
Snapshot                  : isos-snap-2019-04-11_GMT-2019.04.12-11.18.24
Snap UUID                 : 70d5716e-4633-43d4-a562-8e29a96b0104
Description               : TEST
Created                   : 2019-04-12 11:18:24
Snap Volumes:

        Snap Volume Name          : 584e88eab0374c0582cc544a2bc4b79e
        Origin Volume name        : isos
        Snaps taken for isos      : 1
        Snaps available for isos  : 255
        Status                    : Stopped
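
For reference: the .automount units were presumably taken out of the picture with something along these lines - this assumes a matching gluster_bricks-isos.mount unit exists alongside the .automount unit shown further down in the thread:

systemctl disable --now gluster_bricks-isos.automount
systemctl start gluster_bricks-isos.mount
findmnt /gluster_bricks/isos

Once the autofs trigger mount is gone, findmnt shows only the real xfs mount, so glusterd resolves the brick to the thin LV and the pre-validation passes.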


Best Regards,
Strahil Nikolov

    On Friday, April 12, 2019, 04:32:18 GMT-4, Strahil Nikolov <hunter86_bg at yahoo.com> wrote:
 
  Hello All,
it seems that "systemd-1" comes from the .automount unit, and not from the .mount unit itself.
[root at ovirt1 system]# systemctl cat gluster_bricks-isos.automount
# /etc/systemd/system/gluster_bricks-isos.automount
[Unit]
Description=automount for gluster brick ISOS

[Automount]
Where=/gluster_bricks/isos

[Install]
WantedBy=multi-user.target
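
Side note: the corresponding .mount unit is not shown in the thread. Given systemd's path-based naming for /gluster_bricks/isos, it would look roughly like the sketch below; the What= line is taken from the findmnt output further down, while Options= and the VDO ordering (assuming vdo.service is the VDO service name) are assumptions, not copied from the actual unit:

# hypothetical /etc/systemd/system/gluster_bricks-isos.mount - a sketch, not from the thread
[Unit]
Description=mount for gluster brick ISOS
Requires=vdo.service
After=vdo.service

[Mount]
What=/dev/mapper/gluster_vg_md0-gluster_lv_isos
Where=/gluster_bricks/isos
Type=xfs
Options=noatime,nodiratime

[Install]
WantedBy=multi-user.target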



Best Regards,
Strahil Nikolov

    On Friday, April 12, 2019, 04:12:31 GMT-4, Strahil Nikolov <hunter86_bg at yahoo.com> wrote:
 
  Hello All,
I have tried enabling debug logging to see the reason for the issue. Here is the relevant part of glusterd.log:
[2019-04-12 07:56:54.526508] E [MSGID: 106077] [glusterd-snapshot.c:1882:glusterd_is_thinp_brick] 0-management: Failed to get pool name for device systemd-1
[2019-04-12 07:56:54.527509] E [MSGID: 106121] [glusterd-snapshot.c:2523:glusterd_snapshot_create_prevalidate] 0-management: Failed to pre validate
[2019-04-12 07:56:54.527525] E [MSGID: 106024] [glusterd-snapshot.c:2547:glusterd_snapshot_create_prevalidate] 0-management: Snapshot is supported only for thin provisioned LV. Ensure that all bricks of isos are thinly provisioned LV.
[2019-04-12 07:56:54.527539] W [MSGID: 106029] [glusterd-snapshot.c:8613:glusterd_snapshot_prevalidate] 0-management: Snapshot create pre-validation failed
[2019-04-12 07:56:54.527552] W [MSGID: 106121] [glusterd-mgmt.c:147:gd_mgmt_v3_pre_validate_fn] 0-management: Snapshot Prevalidate Failed
[2019-04-12 07:56:54.527568] E [MSGID: 106121] [glusterd-mgmt.c:1015:glusterd_mgmt_v3_pre_validate] 0-management: Pre Validation failed for operation Snapshot on local node
[2019-04-12 07:56:54.527583] E [MSGID: 106121] [glusterd-mgmt.c:2377:glusterd_mgmt_v3_initiate_snap_phases] 0-management: Pre Validation Failed
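
Side note: the debug level was presumably raised via glusterd's log-level option; on stock EL7 packaging, where glusterd.service sources /etc/sysconfig/glusterd, something like the following would do it (the exact sysconfig contents are an assumption):

# set LOG_LEVEL=DEBUG in /etc/sysconfig/glusterd, then:
systemctl restart glusterd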

Here is the output of lvscan and lvs:
[root at ovirt1 ~]# lvscan
  ACTIVE            '/dev/gluster_vg_md0/my_vdo_thinpool' [9.86 TiB] inherit
  ACTIVE            '/dev/gluster_vg_md0/gluster_lv_data' [500.00 GiB] inherit
  ACTIVE            '/dev/gluster_vg_md0/gluster_lv_isos' [50.00 GiB] inherit
  ACTIVE            '/dev/gluster_vg_ssd/my_ssd_thinpool' [168.59 GiB] inherit
  ACTIVE            '/dev/gluster_vg_ssd/gluster_lv_engine' [40.00 GiB] inherit
  ACTIVE            '/dev/centos_ovirt1/swap' [6.70 GiB] inherit
  ACTIVE            '/dev/centos_ovirt1/home' [1.00 GiB] inherit
  ACTIVE            '/dev/centos_ovirt1/root' [60.00 GiB] inherit
[root at ovirt1 ~]# lvs --noheadings -o pool_lv



  my_vdo_thinpool
  my_vdo_thinpool

  my_ssd_thinpool

[root at ovirt1 ~]# ssh ovirt2 "lvscan;lvs --noheadings -o pool_lv"
  ACTIVE            '/dev/gluster_vg_md0/my_vdo_thinpool' [<9.77 TiB] inherit
  ACTIVE            '/dev/gluster_vg_md0/gluster_lv_data' [500.00 GiB] inherit
  ACTIVE            '/dev/gluster_vg_md0/gluster_lv_isos' [50.00 GiB] inherit
  ACTIVE            '/dev/gluster_vg_ssd/my_ssd_thinpool' [<161.40 GiB] inherit
  ACTIVE            '/dev/gluster_vg_ssd/gluster_lv_engine' [40.00 GiB] inherit
  ACTIVE            '/dev/centos_ovirt2/root' [15.00 GiB] inherit
  ACTIVE            '/dev/centos_ovirt2/home' [1.00 GiB] inherit
  ACTIVE            '/dev/centos_ovirt2/swap' [16.00 GiB] inherit



  my_vdo_thinpool
  my_vdo_thinpool

  my_ssd_thinpool

[root at ovirt1 ~]# ssh ovirt3 "lvscan;lvs --noheadings -o pool_lv"
  ACTIVE            '/dev/gluster_vg_sda3/gluster_thinpool_sda3' [41.00 GiB] inherit
  ACTIVE            '/dev/gluster_vg_sda3/gluster_lv_data' [15.00 GiB] inherit
  ACTIVE            '/dev/gluster_vg_sda3/gluster_lv_isos' [15.00 GiB] inherit
  ACTIVE            '/dev/gluster_vg_sda3/gluster_lv_engine' [15.00 GiB] inherit
  ACTIVE            '/dev/centos_ovirt3/root' [20.00 GiB] inherit
  ACTIVE            '/dev/centos_ovirt3/home' [1.00 GiB] inherit
  ACTIVE            '/dev/centos_ovirt3/swap' [8.00 GiB] inherit



  gluster_thinpool_sda3
  gluster_thinpool_sda3
  gluster_thinpool_sda3


I am mounting my bricks via systemd, as I have issues with the bricks coming up before VDO is ready.
[root at ovirt1 ~]# findmnt /gluster_bricks/isos
TARGET               SOURCE                                     FSTYPE OPTIONS
/gluster_bricks/isos systemd-1                                  autofs rw,relatime,fd=31,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=21843
/gluster_bricks/isos /dev/mapper/gluster_vg_md0-gluster_lv_isos xfs    rw,noatime,nodiratime,seclabel,attr2,inode64,noquota
[root at ovirt1 ~]# ssh ovirt2 "findmnt /gluster_bricks/isos "
TARGET               SOURCE                                     FSTYPE OPTIONS
/gluster_bricks/isos systemd-1                                  autofs rw,relatime,fd=26,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=14279
/gluster_bricks/isos /dev/mapper/gluster_vg_md0-gluster_lv_isos xfs    rw,noatime,nodiratime,seclabel,attr2,inode64,noquota
[root at ovirt1 ~]# ssh ovirt3 "findmnt /gluster_bricks/isos "
TARGET               SOURCE                                      FSTYPE OPTIONS
/gluster_bricks/isos systemd-1                                   autofs rw,relatime,fd=35,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=17770
/gluster_bricks/isos /dev/mapper/gluster_vg_sda3-gluster_lv_isos xfs    rw,noatime,nodiratime,seclabel,attr2,inode64,logbsize=256k,sunit=512,swidth=1024,noquota


[root at ovirt1 ~]# grep "gluster_bricks" /proc/mounts
systemd-1 /gluster_bricks/data autofs rw,relatime,fd=22,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=21513 0 0
systemd-1 /gluster_bricks/engine autofs rw,relatime,fd=25,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=21735 0 0
systemd-1 /gluster_bricks/isos autofs rw,relatime,fd=31,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=21843 0 0
/dev/mapper/gluster_vg_ssd-gluster_lv_engine /gluster_bricks/engine xfs rw,seclabel,noatime,nodiratime,attr2,inode64,sunit=256,swidth=256,noquota 0 0
/dev/mapper/gluster_vg_md0-gluster_lv_isos /gluster_bricks/isos xfs rw,seclabel,noatime,nodiratime,attr2,inode64,noquota 0 0
/dev/mapper/gluster_vg_md0-gluster_lv_data /gluster_bricks/data xfs rw,seclabel,noatime,nodiratime,attr2,inode64,noquota 0 0




Obviously, gluster is picking up "systemd-1" as the device and tries to check whether it is a thin LV. Where should I open a bug for that?
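
A rough illustration of the shadowing (not a claim about glusterd internals): both the autofs trigger and the real xfs mount list /gluster_bricks/isos in /proc/mounts, and the autofs line comes first, so a first-match lookup of the brick path returns "systemd-1" instead of the LV device:

awk '$2 == "/gluster_bricks/isos" {print $1}' /proc/mounts

On the output above this prints "systemd-1" first and /dev/mapper/gluster_vg_md0-gluster_lv_isos second.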
P.S.: Adding oVirt User list.

Best Regards,
Strahil Nikolov


    On Thursday, April 11, 2019, 04:00:31 GMT-4, Strahil Nikolov <hunter86_bg at yahoo.com> wrote:
 
   Hi Rafi,
thanks for your update.
I have tested again with another gluster volume:

[root at ovirt1 glusterfs]# gluster volume info isos

Volume Name: isos
Type: Replicate
Volume ID: 9b92b5bd-79f5-427b-bd8d-af28b038ed2a
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1:/gluster_bricks/isos/isos
Brick2: ovirt2:/gluster_bricks/isos/isos
Brick3: ovirt3.localdomain:/gluster_bricks/isos/isos (arbiter)
Options Reconfigured:
cluster.granular-entry-heal: enable
performance.strict-o-direct: on
network.ping-timeout: 30
storage.owner-gid: 36
storage.owner-uid: 36
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.enable-shared-storage: enable

Command run:
logrotate -f glusterfs ; logrotate -f glusterfs-georep;  gluster snapshot create isos-snap-2019-04-11 isos  description TEST

Logs:
[root at ovirt1 glusterfs]# cat cli.log
[2019-04-11 07:51:02.367453] I [cli.c:769:main] 0-cli: Started running gluster with version 5.5
[2019-04-11 07:51:02.486863] I [MSGID: 101190] [event-epoll.c:621:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2019-04-11 07:51:02.556813] E [cli-rpc-ops.c:11293:gf_cli_snapshot] 0-cli: cli_to_glusterd for snapshot failed
[2019-04-11 07:51:02.556880] I [input.c:31:cli_batch] 0-: Exiting with: -1
[root at ovirt1 glusterfs]# cat glusterd.log
[2019-04-11 07:51:02.553357] E [MSGID: 106024] [glusterd-snapshot.c:2547:glusterd_snapshot_create_prevalidate] 0-management: Snapshot is supported only for thin provisioned LV. Ensure that all bricks of isos are thinly provisioned LV.
[2019-04-11 07:51:02.553365] W [MSGID: 106029] [glusterd-snapshot.c:8613:glusterd_snapshot_prevalidate] 0-management: Snapshot create pre-validation failed
[2019-04-11 07:51:02.553703] W [MSGID: 106121] [glusterd-mgmt.c:147:gd_mgmt_v3_pre_validate_fn] 0-management: Snapshot Prevalidate Failed
[2019-04-11 07:51:02.553719] E [MSGID: 106121] [glusterd-mgmt.c:1015:glusterd_mgmt_v3_pre_validate] 0-management: Pre Validation failed for operation Snapshot on local node

My LVs hosting the bricks are:

[root at ovirt1 ~]# lvs gluster_vg_md0
  LV              VG             Attr       LSize   Pool            Origin Data%  Meta%  Move Log Cpy%Sync Convert
  gluster_lv_data gluster_vg_md0 Vwi-aot--- 500.00g my_vdo_thinpool        35.97
  gluster_lv_isos gluster_vg_md0 Vwi-aot---  50.00g my_vdo_thinpool        52.11
  my_vdo_thinpool gluster_vg_md0 twi-aot---   9.86t                        2.04   11.45

[root at ovirt1 ~]# ssh ovirt2 "lvs gluster_vg_md0"
  LV              VG             Attr       LSize   Pool            Origin Data%  Meta%  Move Log Cpy%Sync Convert
  gluster_lv_data gluster_vg_md0 Vwi-aot--- 500.00g my_vdo_thinpool        35.98
  gluster_lv_isos gluster_vg_md0 Vwi-aot---  50.00g my_vdo_thinpool        25.94
  my_vdo_thinpool gluster_vg_md0 twi-aot---  <9.77t                        1.93   11.39
[root at ovirt1 ~]# ssh ovirt3 "lvs gluster_vg_sda3"
  LV                    VG              Attr       LSize  Pool                  Origin Data%  Meta%  Move Log Cpy%Sync Convert
  gluster_lv_data       gluster_vg_sda3 Vwi-aotz-- 15.00g gluster_thinpool_sda3        0.17
  gluster_lv_engine     gluster_vg_sda3 Vwi-aotz-- 15.00g gluster_thinpool_sda3        0.16
  gluster_lv_isos       gluster_vg_sda3 Vwi-aotz-- 15.00g gluster_thinpool_sda3        0.12
  gluster_thinpool_sda3 gluster_vg_sda3 twi-aotz-- 41.00g                              0.16   1.58

As you can see, all bricks are thin LVs and space is not the issue.
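
Side note: this can also be read off the Attr column in the lvs output above - a leading 'V' marks a thin volume and a leading 't' marks a thin pool, e.g.:

lvs -o lv_name,lv_attr,pool_lv gluster_vg_md0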
Can someone give me a hint on how to enable debug logging, so that the gluster logs show the reason for that pre-check failure?
Best Regards,
Strahil Nikolov


    On Wednesday, April 10, 2019, 09:05:15 GMT-4, Rafi Kavungal Chundattu Parambil <rkavunga at redhat.com> wrote:
 
 Hi Strahil,

The name of the device is not a problem here at all. Can you please check the glusterd log and see if there is any useful information about the failure? Also, please provide the output of `lvscan` and `lvs --noheadings -o pool_lv` from all nodes.

Regards
Rafi KC

----- Original Message -----
From: "Strahil Nikolov" <hunter86_bg at yahoo.com>
To: gluster-users at gluster.org
Sent: Wednesday, April 10, 2019 2:36:39 AM
Subject: [Gluster-users] Gluster snapshot fails

Hello Community, 

I have a problem creating a snapshot of a replica 3 arbiter 1 volume.

Error: 
[root at ovirt2 ~]# gluster snapshot create before-423 engine description "Before upgrade of engine from 4.2.2 to 4.2.3" 
snapshot create: failed: Snapshot is supported only for thin provisioned LV. Ensure that all bricks of engine are thinly provisioned LV. 
Snapshot command failed 

Volume info: 

Volume Name: engine 
Type: Replicate 
Volume ID: 30ca1cc2-f2f7-4749-9e2e-cee9d7099ded 
Status: Started 
Snapshot Count: 0 
Number of Bricks: 1 x (2 + 1) = 3 
Transport-type: tcp 
Bricks: 
Brick1: ovirt1:/gluster_bricks/engine/engine 
Brick2: ovirt2:/gluster_bricks/engine/engine 
Brick3: ovirt3:/gluster_bricks/engine/engine (arbiter) 
Options Reconfigured: 
cluster.granular-entry-heal: enable 
performance.strict-o-direct: on 
network.ping-timeout: 30 
storage.owner-gid: 36 
storage.owner-uid: 36 
user.cifs: off 
features.shard: on 
cluster.shd-wait-qlength: 10000 
cluster.shd-max-threads: 8 
cluster.locking-scheme: granular 
cluster.data-self-heal-algorithm: full 
cluster.server-quorum-type: server 
cluster.quorum-type: auto 
cluster.eager-lock: enable 
network.remote-dio: off 
performance.low-prio-threads: 32 
performance.io-cache: off 
performance.read-ahead: off 
performance.quick-read: off 
transport.address-family: inet 
nfs.disable: on 
performance.client-io-threads: off 
cluster.enable-shared-storage: enable 


All bricks are on thin LVM with plenty of space. The only thing that could be causing it is that ovirt1 & ovirt2 are on /dev/gluster_vg_ssd/gluster_lv_engine, while the arbiter is on /dev/gluster_vg_sda3/gluster_lv_engine.

Is that the issue? Should I rename my brick's VG?
If so, why is there no mention of it in the documentation?


Best Regards, 
Strahil Nikolov 


_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users
        