After another test (I'm trying to convince myself about Gluster reliability :-) I've found that with

performance.write-behind off

the VM works without problems. Now I'll try with write-behind on and flush-behind on too.
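(For reference, these options are toggled per volume with the gluster CLI; the volume name below is assumed:

  gluster volume set gvtest performance.write-behind off
  gluster volume set gvtest performance.write-behind on
  gluster volume set gvtest performance.flush-behind on
)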
<div class="moz-cite-prefix">Il 18/01/2018 13:30, Krutika Dhananjay
ha scritto:<br>
</div>
<blockquote type="cite"
cite="mid:CAPhYV8MJWczNqeq+X+eAYW2YZqUjm=4m1DSuTBQ8teJ_xJysDw@mail.gmail.com">
<div dir="ltr">
<div>Thanks for that input. Adding Niels since the issue is
reproducible only with libgfapi.<br>
<br>
</div>
-Krutika<br>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Thu, Jan 18, 2018 at 1:39 PM, Ing.
Luca Lazzeroni - Trend Servizi Srl <span dir="ltr"><<a
href="mailto:luca@gvnet.it" target="_blank"
moz-do-not-send="true">luca@gvnet.it</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div text="#000000" bgcolor="#FFFFFF">
Another update.

I've set up a replica 3 volume without sharding and tried to install a VM on a qcow2 volume on that device; however, the result is the same and the VM image has been corrupted, at exactly the same point.
Here's the volume info of the created volume:
Volume Name: gvtest
Type: Replicate
Volume ID: e2ddf694-ba46-4bc7-bc9c-e30803374e9d
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: gluster1:/bricks/brick1/gvtest
Brick2: gluster2:/bricks/brick1/gvtest
Brick3: gluster3:/bricks/brick1/gvtest
Options Reconfigured:
user.cifs: off
features.shard: off
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: enable
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
On 17/01/2018 14:51, Ing. Luca Lazzeroni - Trend Servizi Srl wrote:
Hi,

after our IRC chat I've rebuilt a virtual machine with a FUSE-based virtual disk. Everything worked flawlessly.

Now I'm sending you the output of the requested getfattr command on the disk image:

# file: TestFUSE-vda.qcow2
trusted.afr.dirty=0x000000000000000000000000
trusted.gfid=0x40ffafbbe987445692bb31295fa40105
trusted.gfid2path.dc9dde61f0b77eab=0x31326533323631662d373839332d346262302d383738632d3966623765306232336263652f54657374465553452d7664612e71636f7732
trusted.glusterfs.shard.block-size=0x0000000004000000
trusted.glusterfs.shard.file-size=0x00000000c15300000000000000000000000000000060be900000000000000000

Hope this helps.
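(For anyone following along, output in this form typically comes from running getfattr with hex encoding against the image file on one of the bricks; the exact path below is assumed:

  getfattr -d -m . -e hex /bricks/brick2/gv2a2/TestFUSE-vda.qcow2

Note that trusted.glusterfs.shard.block-size=0x0000000004000000 decodes to 64 MiB, the default shard size.)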
<div class="m_-592553901259022859moz-cite-prefix">Il
17/01/2018 11:37, Ing. Luca Lazzeroni - Trend
Servizi Srl ha scritto:<br>
</div>
<blockquote type="cite">
I actually use FUSE and it works. If I try to use the "libgfapi" direct interface to Gluster in qemu-kvm, the problem appears.
<div class="m_-592553901259022859moz-cite-prefix">Il
17/01/2018 11:35, Krutika Dhananjay ha scritto:<br>
</div>
<blockquote type="cite">
<div dir="ltr">
<div>Really? Then which protocol exactly do
you see this issue with? libgfapi? NFS? <br>
<br>
</div>
-Krutika<br>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Wed, Jan 17, 2018
at 3:59 PM, Ing. Luca Lazzeroni - Trend
Servizi Srl <span dir="ltr"><<a
href="mailto:luca@gvnet.it"
target="_blank" moz-do-not-send="true">luca@gvnet.it</a>></span>
wrote:<br>
<blockquote class="gmail_quote"
style="margin:0 0 0 .8ex;border-left:1px
#ccc solid;padding-left:1ex">
<div text="#000000" bgcolor="#FFFFFF">
Of course. Here's the full log. Please note that in FUSE mode everything apparently works without problems. I've installed 4 VMs and updated them without problems.
On 17/01/2018 11:00, Krutika Dhananjay wrote:
<div dir="ltr"><br>
<div class="gmail_extra"><br>
<div class="gmail_quote">On
Tue, Jan 16, 2018 at 10:47
PM, Ing. Luca Lazzeroni -
Trend Servizi Srl <span
dir="ltr"><<a
href="mailto:luca@gvnet.it"
target="_blank"
moz-do-not-send="true">luca@gvnet.it</a>></span>
wrote:<br>
<blockquote
class="gmail_quote"
style="margin:0 0 0
.8ex;border-left:1px #ccc
solid;padding-left:1ex">
<div text="#000000"
bgcolor="#FFFFFF">
I've made the test with the raw image format (preallocated too) and the corruption problem is still there (but without errors in the bricks' log files).

What does the "link" error in the bricks' log files mean?

I've looked at the source code for the lines where it happens, and it seems to be a warning (it doesn't imply a failure).
Indeed, it only represents a transient state when the shards are created for the first time and does not indicate a failure.

Could you also get the logs of the gluster fuse mount process? It should be under /var/log/glusterfs of your client machine with the filename as a hyphenated mount point path.

For example, if your volume was mounted at /mnt/glusterfs, then your log file would be named mnt-glusterfs.log.
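(A quick way to follow that client log while reproducing the problem, assuming the /mnt/glusterfs example above:

  tail -f /var/log/glusterfs/mnt-glusterfs.log
)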
-Krutika
On 16/01/2018 17:39, Ing. Luca Lazzeroni - Trend Servizi Srl wrote:
An update:

I've tried, for my tests, to create the VM volume as

qemu-img create -f qcow2 -o preallocation=full gluster://gluster1/Test/Test-vda.img 20G

et voila!

No errors at all, neither in the bricks' log files (the "link failed" message disappeared) nor in the VM (no corruption, and it installed successfully).

I'll do another test with a fully preallocated raw image.
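(For that raw test, the equivalent invocation would presumably be the following; the image name is assumed:

  qemu-img create -f raw -o preallocation=full gluster://gluster1/Test/Test-vda-raw.img 20G
)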
On 16/01/2018 16:31, Ing. Luca Lazzeroni - Trend Servizi Srl wrote:
I've just done all the steps to reproduce the problem.

The VM volume has been created via "qemu-img create -f qcow2 Test-vda2.qcow2 20G" on the gluster volume mounted via FUSE. I've also tried to create the volume with preallocated metadata, which pushes the problem a bit further away (in time). The volume is a replica 3 arbiter 1 volume hosted on XFS bricks.
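(The preallocated-metadata variant mentioned here would be something like:

  qemu-img create -f qcow2 -o preallocation=metadata Test-vda2.qcow2 20G
)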
Here is the information:
[root@ovh-ov1 bricks]# gluster volume info gv2a2

Volume Name: gv2a2
Type: Replicate
Volume ID: 83c84774-2068-4bfc-b0b9-3e6b93705b9f
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster1:/bricks/brick2/gv2a2
Brick2: gluster3:/bricks/brick3/gv2a2
Brick3: gluster2:/bricks/arbiter_brick_gv2a2/gv2a2 (arbiter)
Options Reconfigured:
storage.owner-gid: 107
storage.owner-uid: 107
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: enable
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
nfs.disable: off
performance.client-io-threads: off
/var/log/glusterfs/glusterd.log:

[2018-01-15 14:17:50.196228] I [MSGID: 106488] [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] 0-management: Received get vol req
[2018-01-15 14:25:09.555214] I [MSGID: 106488] [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] 0-management: Received get vol req

(empty because today it's 2018-01-16)
/var/log/glusterfs/glustershd.log:

[2018-01-14 02:23:02.731245] I [glusterfsd-mgmt.c:1821:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing

(empty too)
/var/log/glusterfs/bricks/brick-brick2-gv2a2.log (the volume in question):
[2018-01-16 15:14:37.809965] I [MSGID: 115029] [server-handshake.c:793:server_setvolume] 0-gv2a2-server: accepted client from ovh-ov1-10302-2018/01/16-15:14:37:790306-gv2a2-client-0-0-0 (version: 3.12.4)
[2018-01-16 15:16:41.471751] E [MSGID: 113020] [posix.c:1485:posix_mknod] 0-gv2a2-posix: setting gfid on /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.4 failed
[2018-01-16 15:16:41.471745] W [MSGID: 113096] [posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.4 -> /bricks/brick2/gv2a2/.glusterfs/a0/14/a0144df3-8d89-4aed-872e-5fef141e9e1e failed [File exists]
[2018-01-16 15:16:42.593392] W [MSGID: 113096] [posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.5 -> /bricks/brick2/gv2a2/.glusterfs/eb/04/eb044e6e-3a23-40a4-9ce1-f13af148eb67 failed [File exists]
[2018-01-16 15:16:42.593426] E [MSGID: 113020] [posix.c:1485:posix_mknod] 0-gv2a2-posix: setting gfid on /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.5 failed
[2018-01-16 15:17:04.129593] W [MSGID: 113096] [posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.8 -> /bricks/brick2/gv2a2/.glusterfs/dc/92/dc92bd0a-0d46-4826-a4c9-d073a924dd8d failed [File exists]
The message "W [MSGID: 113096] [posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.8 -> /bricks/brick2/gv2a2/.glusterfs/dc/92/dc92bd0a-0d46-4826-a4c9-d073a924dd8d failed [File exists]" repeated 5 times between [2018-01-16 15:17:04.129593] and [2018-01-16 15:17:04.129593]
[2018-01-16 15:17:04.129661] E [MSGID: 113020] [posix.c:1485:posix_mknod] 0-gv2a2-posix: setting gfid on /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.8 failed
[2018-01-16 15:17:08.279162] W [MSGID: 113096] [posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.9 -> /bricks/brick2/gv2a2/.glusterfs/c9/b7/c9b71b00-a09f-4df1-b874-041820ca8241 failed [File exists]
[2018-01-16 15:17:08.279162] W [MSGID: 113096] [posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.9 -> /bricks/brick2/gv2a2/.glusterfs/c9/b7/c9b71b00-a09f-4df1-b874-041820ca8241 failed [File exists]
The message "W [MSGID: 113096] [posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.9 -> /bricks/brick2/gv2a2/.glusterfs/c9/b7/c9b71b00-a09f-4df1-b874-041820ca8241 failed [File exists]" repeated 2 times between [2018-01-16 15:17:08.279162] and [2018-01-16 15:17:08.279162]

[2018-01-16 15:17:08.279177] E [MSGID: 113020] [posix.c:1485:posix_mknod] 0-gv2a2-posix: setting gfid on /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.9 failed
The message "W [MSGID: 113096] [posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.4 -> /bricks/brick2/gv2a2/.glusterfs/a0/14/a0144df3-8d89-4aed-872e-5fef141e9e1e failed [File exists]" repeated 6 times between [2018-01-16 15:16:41.471745] and [2018-01-16 15:16:41.471807]
The message "W [MSGID: 113096] [posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.5 -> /bricks/brick2/gv2a2/.glusterfs/eb/04/eb044e6e-3a23-40a4-9ce1-f13af148eb67 failed [File exists]" repeated 2 times between [2018-01-16 15:16:42.593392] and [2018-01-16 15:16:42.593430]
[2018-01-16 15:17:32.229689] W [MSGID: 113096] [posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.14 -> /bricks/brick2/gv2a2/.glusterfs/53/04/530449fa-d698-4928-a262-9a0234232323 failed [File exists]
[2018-01-16 15:17:32.229720] E [MSGID: 113020] [posix.c:1485:posix_mknod] 0-gv2a2-posix: setting gfid on /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.14 failed
[2018-01-16 15:18:07.154330] W [MSGID: 113096] [posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.17 -> /bricks/brick2/gv2a2/.glusterfs/81/96/8196dd19-84bc-4c3d-909f-8792e9b4929d failed [File exists]
[2018-01-16 15:18:07.154375] E [MSGID: 113020] [posix.c:1485:posix_mknod] 0-gv2a2-posix: setting gfid on /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.17 failed
The message "W [MSGID: 113096] [posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.14 -> /bricks/brick2/gv2a2/.glusterfs/53/04/530449fa-d698-4928-a262-9a0234232323 failed [File exists]" repeated 7 times between [2018-01-16 15:17:32.229689] and [2018-01-16 15:17:32.229806]
The message "W [MSGID: 113096] [posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.17 -> /bricks/brick2/gv2a2/.glusterfs/81/96/8196dd19-84bc-4c3d-909f-8792e9b4929d failed [File exists]" repeated 3 times between [2018-01-16 15:18:07.154330] and [2018-01-16 15:18:07.154357]
[2018-01-16 15:19:23.618794] W [MSGID: 113096] [posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.21 -> /bricks/brick2/gv2a2/.glusterfs/6d/02/6d02bd98-83de-43e8-a7af-b1d5f5160403 failed [File exists]
[2018-01-16 15:19:23.618827] E [MSGID: 113020] [posix.c:1485:posix_mknod] 0-gv2a2-posix: setting gfid on /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.21 failed
The message "W [MSGID: 113096] [posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.21 -> /bricks/brick2/gv2a2/.glusterfs/6d/02/6d02bd98-83de-43e8-a7af-b1d5f5160403 failed [File exists]" repeated 3 times between [2018-01-16 15:19:23.618794] and [2018-01-16 15:19:23.618794]
Thank you,

On 16/01/2018 11:40, Krutika Dhananjay wrote:
<div dir="ltr">
<div>
<div>
<div>Also to
help isolate
the component,
could you
answer these:<br>
<br>
</div>
1. on a
different
volume with
shard not
enabled, do
you see this
issue?<br>
</div>
2. on a plain
3-way
replicated
volume (no
arbiter), do
you see this
issue?<br>
<br>
</div>
<br>
On Tue, Jan 16, 2018 at 4:03 PM, Krutika Dhananjay <kdhananj@redhat.com> wrote:
<div dir="ltr">
<div>Please
share the
volume-info
output and the
logs under
/var/log/glusterfs/
from all your
nodes. for
investigating
the issue.</div>
<span
class="m_-592553901259022859m_-2254898158214265998m_-2427999713053267927HOEnZb"><font
color="#888888">
<div><br>
</div>
-Krutika<br>
</font></span></div>
On Tue, Jan 16, 2018 at 1:30 PM, Ing. Luca Lazzeroni - Trend Servizi Srl <luca@gvnet.it> wrote:

Hi to everyone.
I've got a strange problem with a gluster setup: 3 nodes with CentOS 7.4, Gluster 3.12.4 from the CentOS/Gluster repositories, and QEMU-KVM version 2.9.0 (compiled from RHEL sources).

I'm running volumes in replica 3 arbiter 1 mode (but I've got a volume in "pure" replica 3 mode too). I've applied the "virt" group settings to my volumes since they host VM images.
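(For reference, the "virt" group is applied per volume with the gluster CLI; the volume name below is assumed:

  gluster volume set gv2a2 group virt
)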
If I try to install something (e.g. Ubuntu Server 16.04.3) on a VM (and so generate a bit of I/O inside it) and configure KVM to access the gluster volume directly (via libvirt), the install fails after a while because the disk content is corrupted. If I inspect the blocks inside the disk (by accessing the image directly from outside) I can find many files filled with "^@" (NUL bytes).
Also, what exactly do you mean by accessing the image directly from outside? Was it from the brick directories directly? Was it from the mount point of the volume? Could you elaborate? Which files exactly did you check?

-Krutika
If, instead, I configure KVM to access the VM images via a FUSE mount, everything seems to work correctly.

Note that the install problem shows up 100% of the time with a QCOW2 image, while with RAW disk images it appears only later.

Has anyone experienced the same problem?

Thank you,
<pre class="moz-signature" cols="72">--
Ing. Luca Lazzeroni
Responsabile Ricerca e Sviluppo
Trend Servizi Srl
Tel: 0376/631761
Web: <a class="moz-txt-link-freetext" href="https://www.trendservizi.it">https://www.trendservizi.it</a></pre>
</body>
</html>