<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<p>I've run the test with a raw image format (fully preallocated as
well) and the corruption problem is still there (though without
errors in the bricks' log files).</p>
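<p>(For reference, presumably the raw test used the raw equivalent of
the qcow2 command quoted below; a minimal sketch, assuming the same
gluster:// path and size:)</p>
<pre># Hypothetical raw equivalent of the qcow2 test quoted below; for raw
# images, preallocation=full writes out all 20G up front.
qemu-img create -f raw -o preallocation=full \
    gluster://gluster1/Test/Test-vda.img 20G</pre>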
<p>What does the "link" error in bricks log files means ? <br>
</p>
<p>I've looked through the source code for the lines where it
happens, and it appears to be a warning (it doesn't imply a
failure).</p>
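<p>(For anyone following along, a minimal illustration of what that
warning corresponds to on disk: every file on a brick gets a hard
link under .glusterfs keyed by its gfid, and the warning fires when
that link already exists. Paths here are taken from the brick log
quoted below:)</p>
<pre># .glusterfs layout: .glusterfs/&lt;byte1&gt;/&lt;byte2&gt;/&lt;full-gfid&gt;
# The logged warning is roughly the failure of:
ln /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.4 \
   /bricks/brick2/gv2a2/.glusterfs/a0/14/a0144df3-8d89-4aed-872e-5fef141e9e1e
# ln: failed to create hard link: File exists   (EEXIST)</pre>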
<p><br>
</p>
<br>
<div class="moz-cite-prefix">Il 16/01/2018 17:39, Ing. Luca
Lazzeroni - Trend Servizi Srl ha scritto:<br>
</div>
<blockquote type="cite"
cite="mid:7405d200-8ec6-04df-2462-468299334070@gvnet.it">
<p>An update:</p>
<p>For my tests, I've tried creating the VM volume with:</p>
<p>qemu-img create -f qcow2 -o preallocation=full
gluster://gluster1/Test/Test-vda.img 20G</p>
<p>et voilà!</p>
<p>No errors at all, neither in the bricks' log files (the "link
failed" messages disappeared) nor in the VM (no corruption, and the
install completed successfully).</p>
<p>I'll do another test with a fully preallocated raw image.</p>
<p><br>
</p>
<br>
<div class="moz-cite-prefix">Il 16/01/2018 16:31, Ing. Luca
Lazzeroni - Trend Servizi Srl ha scritto:<br>
</div>
<blockquote type="cite"
cite="mid:20c4d5b9-4733-1465-ae4d-58528394a46f@gvnet.it">
<meta http-equiv="Content-Type" content="text/html;
charset=utf-8">
<p>I've just done all the steps to reproduce the problem. <br>
</p>
<p>The VM volume was created via "qemu-img create -f qcow2
Test-vda2.qcow2 20G" on the gluster volume mounted via FUSE.
I've also tried creating the volume with preallocated
metadata, which only delays the problem (it appears later).
The volume is a replica 3 arbiter 1 volume hosted on XFS
bricks.<br>
</p>
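<p>(For clarity, the preallocated-metadata variant is the same
command with qemu-img's standard preallocation option:)</p>
<pre># qcow2 with metadata preallocation only (no data clusters written up front):
qemu-img create -f qcow2 -o preallocation=metadata Test-vda2.qcow2 20G</pre>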
<p>Here is the information:</p>
<p>[root@ovh-ov1 bricks]# gluster volume info gv2a2<br>
<br>
Volume Name: gv2a2<br>
Type: Replicate<br>
Volume ID: 83c84774-2068-4bfc-b0b9-3e6b93705b9f<br>
Status: Started<br>
Snapshot Count: 0<br>
Number of Bricks: 1 x (2 + 1) = 3<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1: gluster1:/bricks/brick2/gv2a2<br>
Brick2: gluster3:/bricks/brick3/gv2a2<br>
Brick3: gluster2:/bricks/arbiter_brick_gv2a2/gv2a2 (arbiter)<br>
Options Reconfigured:<br>
storage.owner-gid: 107<br>
storage.owner-uid: 107<br>
user.cifs: off<br>
features.shard: on<br>
cluster.shd-wait-qlength: 10000<br>
cluster.shd-max-threads: 8<br>
cluster.locking-scheme: granular<br>
cluster.data-self-heal-algorithm: full<br>
cluster.server-quorum-type: server<br>
cluster.quorum-type: auto<br>
cluster.eager-lock: enable<br>
network.remote-dio: enable<br>
performance.low-prio-threads: 32<br>
performance.io-cache: off<br>
performance.read-ahead: off<br>
performance.quick-read: off<br>
transport.address-family: inet<br>
nfs.disable: off<br>
performance.client-io-threads: off<br>
<br>
</p>
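<p>(Since features.shard is on, each image is stored as fixed-size
shards under .shard, which is what the brick log below refers to.
To check the shard size in effect, one could run, e.g.:)</p>
<pre># Shows the effective shard size (64MB by default unless overridden,
# e.g. by the "virt" group settings):
gluster volume get gv2a2 features.shard-block-size</pre>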
<p>/var/log/glusterfs/glusterd.log:</p>
<p>[2018-01-15 14:17:50.196228] I [MSGID: 106488]
[glusterd-handler.c:1548:__glusterd_handle_cli_get_volume]
0-management: Received get vol req<br>
[2018-01-15 14:25:09.555214] I [MSGID: 106488]
[glusterd-handler.c:1548:__glusterd_handle_cli_get_volume]
0-management: Received get vol req<br>
</p>
<p>(nothing from today, 2018-01-16; the latest entries are from 2018-01-15)</p>
<p>/var/log/glusterfs/glustershd.log:</p>
<p>[2018-01-14 02:23:02.731245] I
[glusterfsd-mgmt.c:1821:mgmt_getspec_cbk] 0-glusterfs: No
change in volfile,continuing<br>
</p>
<p>(likewise, nothing from today)</p>
<p>/var/log/glusterfs/bricks/brick-brick2-gv2a2.log (the
affected volume):</p>
<p>[2018-01-16 15:14:37.809965] I [MSGID: 115029]
[server-handshake.c:793:server_setvolume] 0-gv2a2-server:
accepted client from
ovh-ov1-10302-2018/01/16-15:14:37:790306-gv2a2-client-0-0-0
(version: 3.12.4)<br>
[2018-01-16 15:16:41.471751] E [MSGID: 113020]
[posix.c:1485:posix_mknod] 0-gv2a2-posix: setting gfid on
/bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.4
failed<br>
[2018-01-16 15:16:41.471745] W [MSGID: 113096]
[posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link
/bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.4
->
/bricks/brick2/gv2a2/.glusterfs/a0/14/a0144df3-8d89-4aed-872e-5fef141e9e1e failed
[File exists]<br>
[2018-01-16 15:16:42.593392] W [MSGID: 113096]
[posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link
/bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.5
->
/bricks/brick2/gv2a2/.glusterfs/eb/04/eb044e6e-3a23-40a4-9ce1-f13af148eb67 failed
[File exists]<br>
[2018-01-16 15:16:42.593426] E [MSGID: 113020]
[posix.c:1485:posix_mknod] 0-gv2a2-posix: setting gfid on
/bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.5
failed<br>
[2018-01-16 15:17:04.129593] W [MSGID: 113096]
[posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link
/bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.8
->
/bricks/brick2/gv2a2/.glusterfs/dc/92/dc92bd0a-0d46-4826-a4c9-d073a924dd8d failed
[File exists]<br>
The message "W [MSGID: 113096]
[posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link
/bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.8
->
/bricks/brick2/gv2a2/.glusterfs/dc/92/dc92bd0a-0d46-4826-a4c9-d073a924dd8d failed
[File exists]" repeated 5 times between [2018-01-16
15:17:04.129593] and [2018-01-16 15:17:04.129593]<br>
[2018-01-16 15:17:04.129661] E [MSGID: 113020]
[posix.c:1485:posix_mknod] 0-gv2a2-posix: setting gfid on
/bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.8
failed<br>
[2018-01-16 15:17:08.279162] W [MSGID: 113096]
[posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link
/bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.9
->
/bricks/brick2/gv2a2/.glusterfs/c9/b7/c9b71b00-a09f-4df1-b874-041820ca8241 failed
[File exists]<br>
[2018-01-16 15:17:08.279162] W [MSGID: 113096]
[posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link
/bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.9
->
/bricks/brick2/gv2a2/.glusterfs/c9/b7/c9b71b00-a09f-4df1-b874-041820ca8241 failed
[File exists]<br>
The message "W [MSGID: 113096]
[posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link
/bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.9
->
/bricks/brick2/gv2a2/.glusterfs/c9/b7/c9b71b00-a09f-4df1-b874-041820ca8241 failed
[File exists]" repeated 2 times between [2018-01-16
15:17:08.279162] and [2018-01-16 15:17:08.279162]</p>
<p>[2018-01-16 15:17:08.279177] E [MSGID: 113020]
[posix.c:1485:posix_mknod] 0-gv2a2-posix: setting gfid on
/bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.9
failed<br>
The message "W [MSGID: 113096]
[posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link
/bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.4
->
/bricks/brick2/gv2a2/.glusterfs/a0/14/a0144df3-8d89-4aed-872e-5fef141e9e1e failed
[File exists]" repeated 6 times between [2018-01-16
15:16:41.471745] and [2018-01-16 15:16:41.471807]<br>
The message "W [MSGID: 113096]
[posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link
/bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.5
->
/bricks/brick2/gv2a2/.glusterfs/eb/04/eb044e6e-3a23-40a4-9ce1-f13af148eb67 failed
[File exists]" repeated 2 times between [2018-01-16
15:16:42.593392] and [2018-01-16 15:16:42.593430]<br>
[2018-01-16 15:17:32.229689] W [MSGID: 113096]
[posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link
/bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.14
->
/bricks/brick2/gv2a2/.glusterfs/53/04/530449fa-d698-4928-a262-9a0234232323 failed
[File exists]<br>
[2018-01-16 15:17:32.229720] E [MSGID: 113020]
[posix.c:1485:posix_mknod] 0-gv2a2-posix: setting gfid on
/bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.14
failed<br>
[2018-01-16 15:18:07.154330] W [MSGID: 113096]
[posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link
/bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.17
->
/bricks/brick2/gv2a2/.glusterfs/81/96/8196dd19-84bc-4c3d-909f-8792e9b4929d failed
[File exists]<br>
[2018-01-16 15:18:07.154375] E [MSGID: 113020]
[posix.c:1485:posix_mknod] 0-gv2a2-posix: setting gfid on
/bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.17
failed<br>
The message "W [MSGID: 113096]
[posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link
/bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.14
->
/bricks/brick2/gv2a2/.glusterfs/53/04/530449fa-d698-4928-a262-9a0234232323 failed
[File exists]" repeated 7 times between [2018-01-16
15:17:32.229689] and [2018-01-16 15:17:32.229806]<br>
The message "W [MSGID: 113096]
[posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link
/bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.17
->
/bricks/brick2/gv2a2/.glusterfs/81/96/8196dd19-84bc-4c3d-909f-8792e9b4929d failed
[File exists]" repeated 3 times between [2018-01-16
15:18:07.154330] and [2018-01-16 15:18:07.154357]<br>
[2018-01-16 15:19:23.618794] W [MSGID: 113096]
[posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link
/bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.21
->
/bricks/brick2/gv2a2/.glusterfs/6d/02/6d02bd98-83de-43e8-a7af-b1d5f5160403 failed
[File exists]<br>
[2018-01-16 15:19:23.618827] E [MSGID: 113020]
[posix.c:1485:posix_mknod] 0-gv2a2-posix: setting gfid on
/bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.21
failed<br>
The message "W [MSGID: 113096]
[posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link
/bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.21
->
/bricks/brick2/gv2a2/.glusterfs/6d/02/6d02bd98-83de-43e8-a7af-b1d5f5160403 failed
[File exists]" repeated 3 times between [2018-01-16
15:19:23.618794] and [2018-01-16 15:19:23.618794]<br>
<br>
</p>
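<p>(A possible next step for those "setting gfid ... failed" errors
would be to compare the shard's gfid xattr across the bricks; the
value should match the .glusterfs link name from the warnings
above. For example:)</p>
<pre># Run on each node against its own brick path (brick2, brick3 and
# arbiter_brick_gv2a2 respectively):
getfattr -n trusted.gfid -e hex \
    /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.4</pre>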
<p>Thank you,<br>
</p>
<br>
<div class="moz-cite-prefix">Il 16/01/2018 11:40, Krutika
Dhananjay ha scritto:<br>
</div>
<blockquote type="cite"
cite="mid:CAPhYV8NxBA6kamZHp6koiQSBiSSOiLSEZfPLEd8sgKF1f=SaDg@mail.gmail.com">
<div dir="ltr">
<div>
<div>
<div>Also to help isolate the component, could you
answer these:<br>
<br>
</div>
1. on a different volume with shard not enabled, do you
see this issue?<br>
</div>
2. on a plain 3-way replicated volume (no arbiter), do you
see this issue?<br>
<br>
</div>
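<p>(A minimal sketch covering both questions at once: a plain
replica 3 volume, no arbiter, with sharding left at its default
of off. The volume name and brick paths here are hypothetical:)</p>
<pre># Plain 3-way replicated test volume; don't apply the "virt" group,
# so features.shard stays off:
gluster volume create gv-test replica 3 \
    gluster1:/bricks/brick2/gv-test \
    gluster2:/bricks/brick2/gv-test \
    gluster3:/bricks/brick2/gv-test
gluster volume start gv-test</pre>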
<br>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Tue, Jan 16, 2018 at 4:03 PM,
Krutika Dhananjay <span dir="ltr"><<a
href="mailto:kdhananj@redhat.com" target="_blank"
moz-do-not-send="true">kdhananj@redhat.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr">
<div>Please share the volume-info output and the
logs under /var/log/glusterfs/ from all your
nodes for investigating the issue.</div>
<span class="HOEnZb"><font color="#888888">
<div><br>
</div>
-Krutika<br>
</font></span></div>
<div class="HOEnZb">
<div class="h5">
<div class="gmail_extra"><br>
<div class="gmail_quote">On Tue, Jan 16, 2018 at
1:30 PM, Ing. Luca Lazzeroni - Trend Servizi
Srl <span dir="ltr"><<a
href="mailto:luca@gvnet.it"
target="_blank" moz-do-not-send="true">luca@gvnet.it</a>></span>
wrote:<br>
<blockquote class="gmail_quote"
style="margin:0 0 0 .8ex;border-left:1px
#ccc solid;padding-left:1ex">Hi to everyone.<br>
<br>
I've got a strange problem with a gluster
setup: 3 nodes with CentOS 7.4, Gluster
3.12.4 from CentOS/Gluster repositories,
QEMU-KVM version 2.9.0 (compiled from RHEL
sources).<br>
<br>
I'm running volumes in replica 3 arbiter 1
mode (but I've got a volume in "pure"
replica 3 mode too). I've applied the "virt"
group settings to my volumes since they host
VM images.<br>
<br>
If I try to install something (e.g. Ubuntu
Server 16.04.3) on a VM (thereby generating a
bit of I/O inside it) and configure KVM to
access the gluster volume directly (via
libvirt), the install fails after a while
because the disk content is corrupted. If I
inspect the blocks inside the disk (by
accessing the image directly from outside), I
can find many files filled with "^@".<br>
</blockquote>
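<p>("^@" is how editors render NUL (0x00) bytes. A generic way to
confirm zero-filled content from outside the VM, assuming the file
has been extracted or the image mounted; "suspect-file" is a
placeholder:)</p>
<pre># Dump the first bytes; NULs show up as \0 in the -c output:
od -A d -c suspect-file | head
# Count how many bytes of the file are NUL in total:
tr -dc '\0' &lt; suspect-file | wc -c</pre>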
</div>
</div>
</div>
</div>
</blockquote>
<div><br>
</div>
<div>Also, what exactly do you mean by accessing the
image directly from outside? Was it from the brick
directories directly? Was it from the mount point of
the volume? Could you elaborate? Which files exactly
did you check?<br>
</div>
<div><br>
</div>
<div>-Krutika</div>
<div><br>
</div>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class="HOEnZb">
<div class="h5">
<div class="gmail_extra">
<div class="gmail_quote">
<blockquote class="gmail_quote"
style="margin:0 0 0 .8ex;border-left:1px
#ccc solid;padding-left:1ex"> <br>
If, instead, I configure KVM to access VM
images via a FUSE mount, everything seems to
work correctly.<br>
<br>
Note that the install problem occurs 100% of
the time with QCOW2 images, while with RAW
disk images it appears only later.<br>
<br>
Has anyone experienced the same
problem?<br>
<br>
Thank you,<span
class="m_-4139169106555235646HOEnZb"><font
color="#888888"><br>
<br>
<br>
-- <br>
Ing. Luca Lazzeroni<br>
Head of Research and Development<br>
Trend Servizi Srl<br>
Tel: 0376/631761<br>
Web: <a
href="https://www.trendservizi.it"
rel="noreferrer" target="_blank"
moz-do-not-send="true">https://www.trendservizi.it</a><br>
<br>
_______________________________________________<br>
Gluster-users mailing list<br>
<a
href="mailto:Gluster-users@gluster.org"
target="_blank" moz-do-not-send="true">Gluster-users@gluster.org</a><br>
<a
href="http://lists.gluster.org/mailman/listinfo/gluster-users"
rel="noreferrer" target="_blank"
moz-do-not-send="true">http://lists.gluster.org/mailm<wbr>an/listinfo/gluster-users</a><br>
</font></span></blockquote>
</div>
<br>
</div>
</div>
</div>
</blockquote>
</div>
<br>
</div>
</div>
</blockquote>
<br>
<pre class="moz-signature" cols="72">--
Ing. Luca Lazzeroni
Head of Research and Development
Trend Servizi Srl
Tel: 0376/631761
Web: <a class="moz-txt-link-freetext" href="https://www.trendservizi.it" moz-do-not-send="true">https://www.trendservizi.it</a></pre>
<br>
<fieldset class="mimeAttachmentHeader"></fieldset>
<br>
<pre wrap="">_______________________________________________
Gluster-users mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Gluster-users@gluster.org" moz-do-not-send="true">Gluster-users@gluster.org</a>
<a class="moz-txt-link-freetext" href="http://lists.gluster.org/mailman/listinfo/gluster-users" moz-do-not-send="true">http://lists.gluster.org/mailman/listinfo/gluster-users</a></pre>
</blockquote>
<br>
<pre class="moz-signature" cols="72">--
Ing. Luca Lazzeroni
Head of Research and Development
Trend Servizi Srl
Tel: 0376/631761
Web: <a class="moz-txt-link-freetext" href="https://www.trendservizi.it" moz-do-not-send="true">https://www.trendservizi.it</a></pre>
<br>
<fieldset class="mimeAttachmentHeader"></fieldset>
<br>
<pre wrap="">_______________________________________________
Gluster-users mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a>
<a class="moz-txt-link-freetext" href="http://lists.gluster.org/mailman/listinfo/gluster-users">http://lists.gluster.org/mailman/listinfo/gluster-users</a></pre>
</blockquote>
<br>
<pre class="moz-signature" cols="72">--
Ing. Luca Lazzeroni
Head of Research and Development
Trend Servizi Srl
Tel: 0376/631761
Web: <a class="moz-txt-link-freetext" href="https://www.trendservizi.it">https://www.trendservizi.it</a></pre>
</body>
</html>