[Gluster-users] Gluster snapshot fails
Rafi Kavungal Chundattu Parambil
rkavunga at redhat.com
Wed Apr 10 13:05:01 UTC 2019
Hi Strahil,
The name of the device is not a problem here at all. Can you please check the glusterd log and see if there is any useful information about the failure? Also, please provide the output of `lvscan` and `lvs --noheadings -o pool_lv` from all nodes.
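For example, something along these lines on each node should be enough (assuming the default glusterd log location; adjust the path if your installation logs elsewhere):

    grep -i snapshot /var/log/glusterfs/glusterd.log   # recent errors around the failed snapshot attempt
    lvscan                                             # all LVs and their state
    lvs --noheadings -o pool_lv                        # backing thin pool per LV (empty field = not a thin LV)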
Regards
Rafi KC
----- Original Message -----
From: "Strahil Nikolov" <hunter86_bg at yahoo.com>
To: gluster-users at gluster.org
Sent: Wednesday, April 10, 2019 2:36:39 AM
Subject: [Gluster-users] Gluster snapshot fails
Hello Community,
I have a problem creating a snapshot of a replica 3 arbiter 1 volume.
Error:
[root@ovirt2 ~]# gluster snapshot create before-423 engine description "Before upgrade of engine from 4.2.2 to 4.2.3"
snapshot create: failed: Snapshot is supported only for thin provisioned LV. Ensure that all bricks of engine are thinly provisioned LV.
Snapshot command failed
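As far as I understand, the snapshot feature requires every brick to sit on a thinly provisioned LV, and I would expect something like this (run on each node) to confirm that - a thin volume should show 'V' as the first lv_attr character and a non-empty pool_lv:

    lvs -o lv_name,vg_name,lv_attr,pool_lv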
Volume info:
Volume Name: engine
Type: Replicate
Volume ID: 30ca1cc2-f2f7-4749-9e2e-cee9d7099ded
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1:/gluster_bricks/engine/engine
Brick2: ovirt2:/gluster_bricks/engine/engine
Brick3: ovirt3:/gluster_bricks/engine/engine (arbiter)
Options Reconfigured:
cluster.granular-entry-heal: enable
performance.strict-o-direct: on
network.ping-timeout: 30
storage.owner-gid: 36
storage.owner-uid: 36
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.enable-shared-storage: enable
All bricks are on thin-provisioned LVM with plenty of free space. The only difference I can see is that the ovirt1 & ovirt2 bricks are on /dev/gluster_vg_ssd/gluster_lv_engine, while the arbiter is on /dev/gluster_vg_sda3/gluster_lv_engine.
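To double-check which LV and VG back the brick on each node, I believe something like this should work (assuming the brick directory is mounted at /gluster_bricks/engine):

    findmnt -no SOURCE /gluster_bricks/engine                                       # device mounted under the brick directory
    lvs -o lv_name,vg_name,pool_lv,lv_attr /dev/gluster_vg_ssd/gluster_lv_engine    # VG and thin pool for that LV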
Is that the issue? Should I rename the brick's VG?
If so, why is there no mention of it in the documentation?
Best Regards,
Strahil Nikolov
_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users