<div dir="ltr"><div>Really? Then which protocol exactly do you see this issue with? libgfapi? NFS? <br><br></div>-Krutika<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Jan 17, 2018 at 3:59 PM, Ing. Luca Lazzeroni - Trend Servizi Srl <span dir="ltr"><<a href="mailto:luca@gvnet.it" target="_blank">luca@gvnet.it</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div text="#000000" bgcolor="#FFFFFF">
Of course. Here's the full log. Please note that in FUSE mode everything apparently works without problems: I've installed 4 VMs and updated them without problems.
<div class="m_-2254898158214265998moz-cite-prefix">Il 17/01/2018 11:00, Krutika Dhananjay
ha scritto:<br>
</div>
<blockquote type="cite">
<div dir="ltr"><br>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Tue, Jan 16, 2018 at 10:47 PM,
Ing. Luca Lazzeroni - Trend Servizi Srl <span dir="ltr"><<a href="mailto:luca@gvnet.it" target="_blank">luca@gvnet.it</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div text="#000000" bgcolor="#FFFFFF">
I've run the test with the raw image format (preallocated too) and the corruption problem is still there (but without errors in the bricks' log files).

What does the "link" error in the bricks' log files mean?

I've looked at the source code around the lines where it happens, and it seems to be a warning (it doesn't imply a failure).
Indeed, it only represents a transient state when the shards are created for the first time and does not indicate a failure.
Could you also get the logs of the gluster fuse mount process? They should be under /var/log/glusterfs on your client machine, with the filename being a hyphenated form of the mount point path.

For example, if your volume was mounted at /mnt/glusterfs, then your log file would be named mnt-glusterfs.log.
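A minimal sketch of pulling that log, assuming the default client log directory and the /mnt/glusterfs example mount point above:

ls /var/log/glusterfs/
tail -f /var/log/glusterfs/mnt-glusterfs.log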
-Krutika
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div text="#000000" bgcolor="#FFFFFF">
<div>
<div class="m_-2254898158214265998h5">
<p><br>
</p>
<br>
<div class="m_-2254898158214265998m_-2427999713053267927moz-cite-prefix">Il
16/01/2018 17:39, Ing. Luca Lazzeroni - Trend
Servizi Srl ha scritto:<br>
</div>
<blockquote type="cite">
<p>An update:</p>
<p>I've tried, for my tests, to create the vm
volume as</p>
<p>qemu-img create -f qcow2 -o preallocation=full
gluster://gluster1/Test/Test-v<wbr>da.img 20G</p>
<p>et voila !</p>
<p>No errors at all, neither in bricks' log file
(the "link failed" message disappeared), neither
in VM (no corruption and installed succesfully).</p>
<p>I'll do another test with a fully preallocated
raw image.</p>
<p><br>
</p>
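For that raw follow-up test, the equivalent command would presumably look like this (the image name here is just an example; the raw format also accepts the preallocation option):

qemu-img create -f raw -o preallocation=full gluster://gluster1/Test/Test-vda-raw.img 20G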
On 16/01/2018 at 16:31, Ing. Luca Lazzeroni - Trend Servizi Srl wrote:
I've just done all the steps to reproduce the problem.

The VM volume was created via "qemu-img create -f qcow2 Test-vda2.qcow2 20G" on the gluster volume mounted via FUSE. I've also tried creating the volume with preallocated metadata, which pushes the problem a bit further away (in time). The volume is a replica 3 arbiter 1 volume hosted on XFS bricks.

Here is the information:
[root@ovh-ov1 bricks]# gluster volume info gv2a2

Volume Name: gv2a2
Type: Replicate
Volume ID: 83c84774-2068-4bfc-b0b9-3e6b93705b9f
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster1:/bricks/brick2/gv2a2
Brick2: gluster3:/bricks/brick3/gv2a2
Brick3: gluster2:/bricks/arbiter_brick_gv2a2/gv2a2 (arbiter)
Options Reconfigured:
storage.owner-gid: 107
storage.owner-uid: 107
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: enable
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
nfs.disable: off
performance.client-io-threads: off
/var/log/glusterfs/glusterd.log:

[2018-01-15 14:17:50.196228] I [MSGID: 106488] [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] 0-management: Received get vol req
[2018-01-15 14:25:09.555214] I [MSGID: 106488] [glusterd-handler.c:1548:__glusterd_handle_cli_get_volume] 0-management: Received get vol req
(empty because today it's 2018-01-16)

/var/log/glusterfs/glustershd.log:

[2018-01-14 02:23:02.731245] I [glusterfsd-mgmt.c:1821:mgmt_getspec_cbk] 0-glusterfs: No change in volfile,continuing

(empty too)

/var/log/glusterfs/bricks/brick-brick2-gv2a2.log (the volume in question):
[2018-01-16 15:14:37.809965] I [MSGID: 115029] [server-handshake.c:793:server_setvolume] 0-gv2a2-server: accepted client from ovh-ov1-10302-2018/01/16-15:14:37:790306-gv2a2-client-0-0-0 (version: 3.12.4)
[2018-01-16 15:16:41.471751] E [MSGID: 113020] [posix.c:1485:posix_mknod] 0-gv2a2-posix: setting gfid on /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.4 failed
[2018-01-16 15:16:41.471745] W [MSGID: 113096] [posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.4 -> /bricks/brick2/gv2a2/.glusterfs/a0/14/a0144df3-8d89-4aed-872e-5fef141e9e1efailed [File exists]
[2018-01-16 15:16:42.593392] W [MSGID: 113096] [posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.5 -> /bricks/brick2/gv2a2/.glusterfs/eb/04/eb044e6e-3a23-40a4-9ce1-f13af148eb67failed [File exists]
[2018-01-16 15:16:42.593426] E [MSGID: 113020] [posix.c:1485:posix_mknod] 0-gv2a2-posix: setting gfid on /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.5 failed
[2018-01-16 15:17:04.129593] W [MSGID: 113096] [posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.8 -> /bricks/brick2/gv2a2/.glusterfs/dc/92/dc92bd0a-0d46-4826-a4c9-d073a924dd8dfailed [File exists]
The message "W [MSGID: 113096] [posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.8 -> /bricks/brick2/gv2a2/.glusterfs/dc/92/dc92bd0a-0d46-4826-a4c9-d073a924dd8dfailed [File exists]" repeated 5 times between [2018-01-16 15:17:04.129593] and [2018-01-16 15:17:04.129593]
[2018-01-16 15:17:04.129661] E [MSGID: 113020] [posix.c:1485:posix_mknod] 0-gv2a2-posix: setting gfid on /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.8 failed
[2018-01-16 15:17:08.279162] W [MSGID: 113096] [posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.9 -> /bricks/brick2/gv2a2/.glusterfs/c9/b7/c9b71b00-a09f-4df1-b874-041820ca8241failed [File exists]
[2018-01-16 15:17:08.279162] W [MSGID: 113096] [posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.9 -> /bricks/brick2/gv2a2/.glusterfs/c9/b7/c9b71b00-a09f-4df1-b874-041820ca8241failed [File exists]
The message "W [MSGID: 113096] [posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.9 -> /bricks/brick2/gv2a2/.glusterfs/c9/b7/c9b71b00-a09f-4df1-b874-041820ca8241failed [File exists]" repeated 2 times between [2018-01-16 15:17:08.279162] and [2018-01-16 15:17:08.279162]

[2018-01-16 15:17:08.279177] E [MSGID: 113020] [posix.c:1485:posix_mknod] 0-gv2a2-posix: setting gfid on /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.9 failed
The message "W [MSGID: 113096] [posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.4 -> /bricks/brick2/gv2a2/.glusterfs/a0/14/a0144df3-8d89-4aed-872e-5fef141e9e1efailed [File exists]" repeated 6 times between [2018-01-16 15:16:41.471745] and [2018-01-16 15:16:41.471807]
The message "W [MSGID: 113096] [posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.5 -> /bricks/brick2/gv2a2/.glusterfs/eb/04/eb044e6e-3a23-40a4-9ce1-f13af148eb67failed [File exists]" repeated 2 times between [2018-01-16 15:16:42.593392] and [2018-01-16 15:16:42.593430]
[2018-01-16 15:17:32.229689] W [MSGID: 113096] [posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.14 -> /bricks/brick2/gv2a2/.glusterfs/53/04/530449fa-d698-4928-a262-9a0234232323failed [File exists]
[2018-01-16 15:17:32.229720] E [MSGID: 113020] [posix.c:1485:posix_mknod] 0-gv2a2-posix: setting gfid on /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.14 failed
[2018-01-16 15:18:07.154330] W [MSGID: 113096] [posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.17 -> /bricks/brick2/gv2a2/.glusterfs/81/96/8196dd19-84bc-4c3d-909f-8792e9b4929dfailed [File exists]
[2018-01-16 15:18:07.154375] E [MSGID: 113020] [posix.c:1485:posix_mknod] 0-gv2a2-posix: setting gfid on /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.17 failed
The message "W [MSGID: 113096] [posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.14 -> /bricks/brick2/gv2a2/.glusterfs/53/04/530449fa-d698-4928-a262-9a0234232323failed [File exists]" repeated 7 times between [2018-01-16 15:17:32.229689] and [2018-01-16 15:17:32.229806]
The message "W [MSGID: 113096] [posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.17 -> /bricks/brick2/gv2a2/.glusterfs/81/96/8196dd19-84bc-4c3d-909f-8792e9b4929dfailed [File exists]" repeated 3 times between [2018-01-16 15:18:07.154330] and [2018-01-16 15:18:07.154357]
[2018-01-16 15:19:23.618794] W [MSGID: 113096] [posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.21 -> /bricks/brick2/gv2a2/.glusterfs/6d/02/6d02bd98-83de-43e8-a7af-b1d5f5160403failed [File exists]
[2018-01-16 15:19:23.618827] E [MSGID: 113020] [posix.c:1485:posix_mknod] 0-gv2a2-posix: setting gfid on /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.21 failed
The message "W [MSGID: 113096] [posix-handle.c:770:posix_handle_hard] 0-gv2a2-posix: link /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.21 -> /bricks/brick2/gv2a2/.glusterfs/6d/02/6d02bd98-83de-43e8-a7af-b1d5f5160403failed [File exists]" repeated 3 times between [2018-01-16 15:19:23.618794] and [2018-01-16 15:19:23.618794]
Thank you,

On 16/01/2018 at 11:40, Krutika Dhananjay wrote:
<div dir="ltr">
<div>
<div>
<div>Also to help isolate the component,
could you answer these:<br>
<br>
</div>
1. on a different volume with shard not
enabled, do you see this issue?<br>
</div>
2. on a plain 3-way replicated volume (no
arbiter), do you see this issue?<br>
<br>
</div>
<br>
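For the second comparison, a throwaway plain replica 3 volume could be created along these lines; the volume name and brick paths here are hypothetical, not taken from the reporter's setup:

gluster volume create testrep3 replica 3 gluster1:/bricks/test/rep3 gluster2:/bricks/test/rep3 gluster3:/bricks/test/rep3
gluster volume start testrep3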
<div class="gmail_extra"><br>
<div class="gmail_quote">On Tue, Jan 16,
2018 at 4:03 PM, Krutika Dhananjay <span dir="ltr"><<a href="mailto:kdhananj@redhat.com" target="_blank">kdhananj@redhat.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr">
Please share the volume info output and the logs under /var/log/glusterfs/ from all your nodes for investigating the issue.

-Krutika
<div class="gmail_extra"><br>
<div class="gmail_quote">On Tue,
Jan 16, 2018 at 1:30 PM, Ing.
Luca Lazzeroni - Trend Servizi
Srl <span dir="ltr"><<a href="mailto:luca@gvnet.it" target="_blank">luca@gvnet.it</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi
to everyone.<br>
<br>
I've got a strange problem
with a gluster setup: 3
nodes with Centos 7.4,
Gluster 3.12.4 from
Centos/Gluster repositories,
QEMU-KVM version 2.9.0
(compiled from RHEL
sources).<br>
<br>
I'm running volumes in replica 3 arbiter 1 mode (but I've got a volume in "pure" replica 3 mode too). I've applied the "virt" group settings to my volumes since they host VM images.
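For reference, that settings group is applied with a one-liner along these lines (shown here with the gv2a2 volume name from elsewhere in the thread):

gluster volume set gv2a2 group virt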
If I try to install something (e.g. Ubuntu Server 16.04.3) on a VM (and so generate a bit of I/O inside it) and configure KVM to access the gluster volume directly (via libvirt), the install fails after a while because the disk content is corrupted. If I inspect the blocks inside the disk (by accessing the image directly from outside) I can find many files filled with "^@".
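As a side note, one common way such an outside inspection of a qcow2 image can be done is via qemu-nbd, sketched here with hypothetical device and mount paths ("^@" is how NUL bytes are rendered by many pagers and editors):

modprobe nbd max_part=8
qemu-nbd --connect=/dev/nbd0 Test-vda2.qcow2
mount /dev/nbd0p1 /mnt/inspect
grep -rlP '\x00{16}' /mnt/inspect | head    # list files containing long runs of NUL bytes
umount /mnt/inspect && qemu-nbd --disconnect /dev/nbd0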
Also, what exactly do you mean by accessing the image directly from outside? Was it from the brick directories directly? Was it from the mount point of the volume? Could you elaborate? Which files exactly did you check?

-Krutika
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class="m_-2254898158214265998m_-2427999713053267927HOEnZb">
<div class="m_-2254898158214265998m_-2427999713053267927h5">
<div class="gmail_extra">
<div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"> <br>
If, instead, I configure KVM
to access VM images via a
FUSE mount, everything seems
to work correctly.<br>
<br>
Note that the problem with
install is verified 100%
time with QCOW2 image, while
it appears only after with
RAW disk images.<br>
<br>
Is there anyone who
experienced the same problem
?<br>
<br>
Thank you,

--
Ing. Luca Lazzeroni
Head of Research and Development
Trend Servizi Srl
Tel: 0376/631761
Web: https://www.trendservizi.it
<pre class="m_-2254898158214265998m_-2427999713053267927moz-signature" cols="72">--
Ing. Luca Lazzeroni
Responsabile Ricerca e Sviluppo
Trend Servizi Srl
Tel: 0376/631761
Web: <a class="m_-2254898158214265998m_-2427999713053267927moz-txt-link-freetext" href="https://www.trendservizi.it" target="_blank">https://www.trendservizi.it</a></pre>
<br>
<fieldset class="m_-2254898158214265998m_-2427999713053267927mimeAttachmentHeader"></fieldset>
<br>
<pre>______________________________<wbr>_________________
Gluster-users mailing list
<a class="m_-2254898158214265998m_-2427999713053267927moz-txt-link-abbreviated" href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a>
<a class="m_-2254898158214265998m_-2427999713053267927moz-txt-link-freetext" href="http://lists.gluster.org/mailman/listinfo/gluster-users" target="_blank">http://lists.gluster.org/mailm<wbr>an/listinfo/gluster-users</a></pre>
</blockquote>
<pre class="m_-2254898158214265998moz-signature" cols="72">--
Ing. Luca Lazzeroni
Responsabile Ricerca e Sviluppo
Trend Servizi Srl
Tel: 0376/631761
Web: <a class="m_-2254898158214265998moz-txt-link-freetext" href="https://www.trendservizi.it" target="_blank">https://www.trendservizi.it</a></pre>
_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users