<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Aug 15, 2018 at 2:54 AM, Walter Deignan <span dir="ltr">&lt;<a href="mailto:WDeignan@uline.com" target="_blank">WDeignan@uline.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span style="font-size:12pt;font-family:Arial">I am using Gluster to
host KVM/QEMU images. I am seeing an intermittent issue where access to
an image hangs. I have to do a lazy unmount of the Gluster volume
in order to break the lock and then reset the affected virtual machine.</span>
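The recovery workaround described above can be sketched as follows. This is a sketch only: the mount point and VM name are hypothetical placeholders, and the server/volume names are taken from the volume info further down; adapt all of them to your environment.

```shell
# Hypothetical mount point and VM name -- adapt to your setup.
umount -l /mnt/gv1                            # lazy unmount: detach the hung FUSE mount now, clean up when busy fds close
mount -t glusterfs dc-vihi44:/gv1 /mnt/gv1    # remount the volume
virsh reset <vm-name>                         # hard-reset the affected guest (libvirt example)
```

Note that `umount -l` only detaches the mount point; I/O already blocked inside the hung mount may still not return until the client process is dealt with.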
<br>
<br><span style="font-size:12pt;font-family:Arial">It happened again today
and I caught the events below in the client side logs. Any thoughts on
what might cause this? It seemed to begin after I upgraded from 3.12.10
to 4.1.1 a few weeks ago.</span>
<br>
<br><span style="font-size:12pt;font-family:Arial">[2018-08-14 14:22:15.549501]
E [MSGID: 114031] [client-rpc-fops_v2.c:1352:client4_0_finodelk_cbk] 2-gv1-client-4:
remote operation failed [Invalid argument]</span>
<br><span style="font-size:12pt;font-family:Arial">[2018-08-14 14:22:15.549576]
E [MSGID: 114031] [client-rpc-fops_v2.c:1352:client4_0_finodelk_cbk] 2-gv1-client-5:
remote operation failed [Invalid argument]</span>
<br><span style="font-size:12pt;font-family:Arial">[2018-08-14 14:22:15.549583]
E [MSGID: 108010] [afr-lk-common.c:284:afr_unlock_inodelk_cbk] 2-gv1-replicate-2:
path=(null) gfid=00000000-0000-0000-0000-000000000000: unlock failed on
subvolume gv1-client-4 with lock owner d89caca92b7f0000 [Invalid argument]</span>
<br><span style="font-size:12pt;font-family:Arial">[2018-08-14 14:22:15.549615]
E [MSGID: 108010] [afr-lk-common.c:284:afr_unlock_inodelk_cbk] 2-gv1-replicate-2:
path=(null) gfid=00000000-0000-0000-0000-000000000000: unlock failed on
subvolume gv1-client-5 with lock owner d89caca92b7f0000 [Invalid argument]</span>
<br><span style="font-size:12pt;font-family:Arial">[2018-08-14 14:52:18.726219]
E [rpc-clnt.c:184:call_bail] 2-gv1-client-4: bailing out frame type(GlusterFS
4.x v1) op(FINODELK(30)) xid = 0xc5e00 sent = 2018-08-14 14:22:15.699082.
timeout = 1800 for <a href="http://10.35.20.106:49159" target="_blank">10.35.20.106:49159</a></span>
<br><span style="font-size:12pt;font-family:Arial">[2018-08-14 14:52:18.726254]
E [MSGID: 114031] [client-rpc-fops_v2.c:1352:client4_0_finodelk_cbk] 2-gv1-client-4:
remote operation failed [Transport endpoint is not connected]</span>
<br><span style="font-size:12pt;font-family:Arial">[2018-08-14 15:22:25.962546]
E [rpc-clnt.c:184:call_bail] 2-gv1-client-5: bailing out frame type(GlusterFS
4.x v1) op(FINODELK(30)) xid = 0xc4a6d sent = 2018-08-14 14:52:18.726329.
timeout = 1800 for <a href="http://10.35.20.107:49164" target="_blank">10.35.20.107:49164</a></span>
<br></blockquote><div><br></div><div><br></div><div>Hi Walter,</div><div><br></div><div>Do you see any warning or error on brick logs around this time?</div><div><br></div><div>Regards,</div><div>Amar</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span style="font-size:12pt;font-family:Arial">[2018-08-14 15:22:25.962587]
E [MSGID: 114031] [client-rpc-fops_v2.c:1352:client4_0_finodelk_cbk] 2-gv1-client-5:
remote operation failed [Transport endpoint is not connected]</span>
<br><span style="font-size:12pt;font-family:Arial">[2018-08-14 15:22:25.962618]
W [MSGID: 108019] [afr-lk-common.c:601:is_blocking_locks_count_sufficient]
2-gv1-replicate-2: Unable to obtain blocking inode lock on even one child
for gfid:24a48cae-53fe-4634-8fb7-0254c85ad672.</span>
<br><span style="font-size:12pt;font-family:Arial">[2018-08-14 15:22:25.962668]
W [fuse-bridge.c:1441:fuse_err_cbk] 0-glusterfs-fuse: 3715808: FSYNC()
ERR =&gt; -1 (Transport endpoint is not connected)</span>
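To answer Amar's question about the brick-side logs, the entries around the hang can be filtered with a simple grep. A sketch: the log path and the two sample lines below are hypothetical, written only to show the filter; real brick logs normally live under /var/log/glusterfs/bricks/ on each server.

```shell
# Hypothetical sample brick log, just to demonstrate the filter:
cat > /tmp/sample-brick.log <<'EOF'
[2018-08-14 14:22:15.549501] E [example.c:123:example_fn] 0-gv1-server: example error line
[2018-08-14 14:22:16.000000] I [example.c:456:other_fn] 0-gv1-server: example info line
EOF
# Keep only Error/Warning entries near the hang window (14:2x on 2018-08-14):
grep '^\[2018-08-14 14:2' /tmp/sample-brick.log | grep -E ' [EW] '
```

Run against the real logs, the same two-stage filter narrows each brick's log to the error/warning lines in the minutes around 14:22.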
<br>
<br><span style="font-size:12pt;font-family:Arial">Volume configuration:</span>
<br>
<br><span style="font-size:12pt;font-family:Arial">Volume Name: gv1</span>
<br><span style="font-size:12pt;font-family:Arial">Type: Distributed-Replicate</span>
<br><span style="font-size:12pt;font-family:Arial">Volume ID: 66ad703e-3bae-4e79-a0b7-29ea38e8fcfc</span>
<br><span style="font-size:12pt;font-family:Arial">Status: Started</span>
<br><span style="font-size:12pt;font-family:Arial">Snapshot Count: 0</span>
<br><span style="font-size:12pt;font-family:Arial">Number of Bricks: 5
x 2 = 10</span>
<br><span style="font-size:12pt;font-family:Arial">Transport-type: tcp</span>
<br><span style="font-size:12pt;font-family:Arial">Bricks:</span>
<br><span style="font-size:12pt;font-family:Arial">Brick1: dc-vihi44:/gluster/bricks/megabrick/data</span>
<br><span style="font-size:12pt;font-family:Arial">Brick2: dc-vihi45:/gluster/bricks/megabrick/data</span>
<br><span style="font-size:12pt;font-family:Arial">Brick3: dc-vihi44:/gluster/bricks/brick1/data</span>
<br><span style="font-size:12pt;font-family:Arial">Brick4: dc-vihi45:/gluster/bricks/brick1/data</span>
<br><span style="font-size:12pt;font-family:Arial">Brick5: dc-vihi44:/gluster/bricks/brick2_1/data</span>
<br><span style="font-size:12pt;font-family:Arial">Brick6: dc-vihi45:/gluster/bricks/brick2/data</span>
<br><span style="font-size:12pt;font-family:Arial">Brick7: dc-vihi44:/gluster/bricks/brick3/data</span>
<br><span style="font-size:12pt;font-family:Arial">Brick8: dc-vihi45:/gluster/bricks/brick3/data</span>
<br><span style="font-size:12pt;font-family:Arial">Brick9: dc-vihi44:/gluster/bricks/brick4/data</span>
<br><span style="font-size:12pt;font-family:Arial">Brick10: dc-vihi45:/gluster/bricks/brick4/data</span>
<br><span style="font-size:12pt;font-family:Arial">Options Reconfigured:</span>
<br><span style="font-size:12pt;font-family:Arial">cluster.min-free-inodes:
6%</span>
<br><span style="font-size:12pt;font-family:Arial">performance.client-io-threads:
off</span>
<br><span style="font-size:12pt;font-family:Arial">nfs.disable: on</span>
<br><span style="font-size:12pt;font-family:Arial">transport.address-family:
inet</span>
<br><span style="font-size:12pt;font-family:Arial">performance.quick-read:
off</span>
<br><span style="font-size:12pt;font-family:Arial">performance.read-ahead:
off</span>
<br><span style="font-size:12pt;font-family:Arial">performance.io-cache:
off</span>
<br><span style="font-size:12pt;font-family:Arial">performance.low-prio-threads:
32</span>
<br><span style="font-size:12pt;font-family:Arial">network.remote-dio:
enable</span>
<br><span style="font-size:12pt;font-family:Arial">cluster.eager-lock:
enable</span>
<br><span style="font-size:12pt;font-family:Arial">cluster.server-quorum-type:
server</span>
<br><span style="font-size:12pt;font-family:Arial">cluster.data-self-heal-algorithm:
full</span>
<br><span style="font-size:12pt;font-family:Arial">cluster.locking-scheme:
granular</span>
<br><span style="font-size:12pt;font-family:Arial">cluster.shd-max-threads:
8</span>
<br><span style="font-size:12pt;font-family:Arial">cluster.shd-wait-qlength:
10000</span>
<br><span style="font-size:12pt;font-family:Arial">user.cifs: off</span>
<br><span style="font-size:12pt;font-family:Arial">cluster.choose-local:
off</span>
<br><span style="font-size:12pt;font-family:Arial">features.shard: on</span>
<br><span style="font-size:12pt;font-family:Arial">cluster.server-quorum-ratio:
51%</span>
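Since the client log shows an FINODELK unlock failing for lock owner d89caca92b7f0000, a brick statedump can show whether a brick still holds that lock. A sketch, assuming the default statedump location (/var/run/gluster) — paths may differ on your install:

```shell
# Trigger a statedump of all brick processes for the volume:
gluster volume statedump gv1
# On each brick server, search the resulting dump files for inode locks
# held by the owner seen in the client log:
grep -r 'd89caca92b7f0000' /var/run/gluster/
```

A granted inodelk entry matching that owner on a brick, with no corresponding live client, would point at a stale lock left behind across the 3.12.10 → 4.1.1 upgrade.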
<br><span class="HOEnZb"><font color="#888888">
<br><span style="font-size:12pt;font-family:Arial">-Walter Deignan<br>
-Uline IT, Systems Architect</span></font></span><br>_______________________________________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
<a href="https://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br></blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div>Amar Tumballi (amarts)<br></div></div></div></div></div>
</div></div>