On Wed, 6 Feb 2019 at 14:34, Hu Bert <revirii@googlemail.com> wrote:
> Hi there,
>
> just curious - from man mount.glusterfs:
>
>        lru-limit=N
>              Set fuse module's limit for number of inodes kept in LRU
> list to N [default: 0]

Sorry, that is a bug in the man page and we will fix that. The current
default is 131072:

    {
        .key = {"lru-limit"},
        .type = GF_OPTION_TYPE_INT,
        .default_value = "131072",
        .min = 0,
        .description = "makes glusterfs invalidate kernel inodes after "
                       "reaching this limit (0 means 'unlimited')",
    },
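To answer the question quoted below: 131072 is already the default in
the 5.x clients, so you only need to pass lru-limit explicitly if you
want a different value. For the crash below, the suggestion was to
disable it at mount time. A rough sketch (server and volume names are
placeholders, not taken from your setup):

    # disable the inode LRU limit entirely, to rule it out as the crash trigger
    mount -t glusterfs -o lru-limit=0 server1:/myvolume /shared/private

    # or pin it to an explicit value
    mount -t glusterfs -o lru-limit=65536 server1:/myvolume /shared/private

The same option can also go into the options column of the glusterfs
entry in fstab.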
>
> This seems to be the default already? Set it explicitly?
>
> Regards,
> Hubert
>
> On Wed, 6 Feb 2019 at 09:26, Nithya Balachandran <nbalacha@redhat.com> wrote:
> >
> > Hi,
> >
> > The client logs indicate that the mount process has crashed.
> > Please try mounting the volume with the option lru-limit=0 and see if it still crashes.
> >
> > Thanks,
> > Nithya
> >
> > On Thu, 24 Jan 2019 at 12:47, Hu Bert <revirii@googlemail.com> wrote:
> >>
> >> Good morning,
> >>
> >> We are currently transferring some data to a new glusterfs volume; to
> >> check the throughput of the new volume/setup while the transfer is
> >> running, I decided to create some files on one of the gluster servers
> >> with dd in a loop:
> >>
> >> while true; do dd if=/dev/urandom of=/shared/private/1G.file bs=1M
> >> count=1024; rm /shared/private/1G.file; done
> >>
> >> /shared/private is the mount point of the glusterfs volume. The dd
> >> loop should run for about an hour, but it has now happened twice that
> >> during the loop the transport endpoint got disconnected:
> >>
> >> dd: failed to open '/shared/private/1G.file': Transport endpoint is not connected
> >> rm: cannot remove '/shared/private/1G.file': Transport endpoint is not connected
> >>
> >> In /var/log/glusterfs/shared-private.log I see:
> >>
> >> [2019-01-24 07:03:28.938745] W [MSGID: 108001]
> >> [afr-transaction.c:1062:afr_handle_quorum] 0-persistent-replicate-0:
> >> 7212652e-c437-426c-a0a9-a47f5972fffe: Failing WRITE as quorum is not met
> >> [Transport endpoint is not connected]
> >> [2019-01-24 07:03:28.939280] E [mem-pool.c:331:__gf_free]
> >> (-->/usr/lib/x86_64-linux-gnu/glusterfs/5.3/xlator/cluster/replicate.so(+0x5be8c)
> >> [0x7eff84248e8c]
> >> -->/usr/lib/x86_64-linux-gnu/glusterfs/5.3/xlator/cluster/replicate.so(+0x5be18)
> >> [0x7eff84248e18]
> >> -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(__gf_free+0xf6)
> >> [0x7eff8a9485a6] ) 0-: Assertion failed:
> >> GF_MEM_TRAILER_MAGIC == *(uint32_t *)((char *)free_ptr + header->size)
> >> [----snip----]
> >>
> >> The whole output can be found here: https://pastebin.com/qTMmFxx0
> >> gluster volume info here: https://pastebin.com/ENTWZ7j3
> >>
> >> After umount + mount the transport endpoint is connected again - until
> >> the next disconnect. A /core file gets generated. Maybe someone wants
> >> to have a look at this file?
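
If you can share the core file (or a backtrace from it), we can take a
look. A rough sketch of how to pull a backtrace, assuming the glusterfs
debug symbols are installed and the paths match your system:

    # the fuse mount runs as the glusterfs client binary
    gdb /usr/sbin/glusterfs /core
    (gdb) bt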