<html><head></head><body><div class="yahoo-style-wrap" style="font-family:courier new, courier, monaco, monospace, sans-serif;font-size:16px;"><div>Hello Community,</div><div><br></div><div>I have a problem creating a snapshot of a replica 3 arbiter 1 volume.</div><div><br></div><div>Error:</div><div><span><div>[root@ovirt2 ~]# gluster snapshot create before-423 engine description "Before upgrade of engine from 4.2.2 to 4.2.3" </div><div>snapshot create: failed: Snapshot is supported only for thin provisioned LV. Ensure that all bricks of engine are thinly provisioned LV.</div><div>Snapshot command failed</div></span><br></div><div>Volume info:</div><div><br></div><div><span><div>Volume Name: engine</div><div>Type: Replicate</div><div>Volume ID: 30ca1cc2-f2f7-4749-9e2e-cee9d7099ded</div><div>Status: Started</div><div>Snapshot Count: 0</div><div>Number of Bricks: 1 x (2 + 1) = 3</div><div>Transport-type: tcp</div><div>Bricks:</div><div>Brick1: ovirt1:/gluster_bricks/engine/engine</div><div>Brick2: ovirt2:/gluster_bricks/engine/engine</div><div>Brick3: ovirt3:/gluster_bricks/engine/engine (arbiter)</div><div>Options Reconfigured:</div><div>cluster.granular-entry-heal: enable</div><div>performance.strict-o-direct: on</div><div>network.ping-timeout: 30</div><div>storage.owner-gid: 36</div><div>storage.owner-uid: 36</div><div>user.cifs: off</div><div>features.shard: on</div><div>cluster.shd-wait-qlength: 10000</div><div>cluster.shd-max-threads: 8</div><div>cluster.locking-scheme: granular</div><div>cluster.data-self-heal-algorithm: full</div><div>cluster.server-quorum-type: server</div><div>cluster.quorum-type: auto</div><div>cluster.eager-lock: enable</div><div>network.remote-dio: off</div><div>performance.low-prio-threads: 32</div><div>performance.io-cache: off</div><div>performance.read-ahead: off</div><div>performance.quick-read: off</div><div>transport.address-family: inet</div><div>nfs.disable: on</div><div>performance.client-io-threads: 
off</div><div>cluster.enable-shared-storage: enable</div><div><br></div><div><br></div><div>All bricks are on thin LVM with plenty of space. The only difference I can see is that ovirt1 & ovirt2 are on <span>/dev/gluster_vg_ssd/gluster_lv_engine, while the arbiter is on <span>/dev/gluster_vg_sda3/gluster_lv_engine.</span></span></div><div><span><span><br></span></span></div><div><span><span>Is that the issue? Should I rename my bricks' VG?</span></span></div><div><span><span>If so, why is there no mention of this in the documentation?</span></span></div><div><span><span><br></span></span></div><div><span><span><br></span></span></div><div><span><span>Best Regards,</span></span></div><div><span><span>Strahil Nikolov</span></span></div></span><br></div></div></body></html>