<div dir="ltr">This sounds like it may be a different issue. Can you file a bug for this ([1]) and provide all the logs/information you have on this (dir name, files on bricks, mount logs etc)?<div><br></div><div>Thanks,</div><div>Nithya</div><div><br></div><div>[1] <a href="https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS">https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS</a></div></div><div class="gmail_extra"><br><div class="gmail_quote">On 4 April 2018 at 19:03, Gudrun Mareike Amedick <span dir="ltr">&lt;<a href="mailto:g.amedick@uni-luebeck.de" target="_blank">g.amedick@uni-luebeck.de</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi,<br>

I'm currently facing the same behaviour.

Today, one of my users tried to delete a folder. It failed, saying the directory wasn't empty. ls -lah showed an empty folder, but on the bricks I found some files. Renaming the directory caused it to reappear.
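
In case it helps: this is roughly how the discrepancy shows up for us (paths are placeholders and the commands are quoted from memory, so treat this as a sketch):

ls -lah /mnt/$VOLUMENAME/path/to/dir        # appears empty on the fuse mount
ls -lah /srv/glusterfs/bricks/DATA201/data/path/to/dir        # the brick copy still contains files
getfattr -d -m . -e hex /srv/glusterfs/bricks/DATA201/data/path/to/dir        # layout xattrs on the brick copy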

We're running gluster 3.12.7-1 on Debian 9 from the repositories provided by gluster.org, upgraded from 3.8 a while ago. The volume is mounted via the fuse client. Our settings are:
> gluster volume info $VOLUMENAME
>
> Volume Name: $VOLUMENAME
> Type: Distribute
> Volume ID: 0d210c70-e44f-46f1-862c-ef260514c9f1
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 23
> Transport-type: tcp
> Bricks:
> Brick1: gluster02:/srv/glusterfs/bricks/DATA201/data
> Brick2: gluster02:/srv/glusterfs/bricks/DATA202/data
> Brick3: gluster02:/srv/glusterfs/bricks/DATA203/data
> Brick4: gluster02:/srv/glusterfs/bricks/DATA204/data
> Brick5: gluster02:/srv/glusterfs/bricks/DATA205/data
> Brick6: gluster02:/srv/glusterfs/bricks/DATA206/data
> Brick7: gluster02:/srv/glusterfs/bricks/DATA207/data
> Brick8: gluster02:/srv/glusterfs/bricks/DATA208/data
> Brick9: gluster01:/srv/glusterfs/bricks/DATA110/data
> Brick10: gluster01:/srv/glusterfs/bricks/DATA111/data
> Brick11: gluster01:/srv/glusterfs/bricks/DATA112/data
> Brick12: gluster01:/srv/glusterfs/bricks/DATA113/data
> Brick13: gluster01:/srv/glusterfs/bricks/DATA114/data
> Brick14: gluster02:/srv/glusterfs/bricks/DATA209/data
> Brick15: gluster01:/srv/glusterfs/bricks/DATA101/data
> Brick16: gluster01:/srv/glusterfs/bricks/DATA102/data
> Brick17: gluster01:/srv/glusterfs/bricks/DATA103/data
> Brick18: gluster01:/srv/glusterfs/bricks/DATA104/data
> Brick19: gluster01:/srv/glusterfs/bricks/DATA105/data
> Brick20: gluster01:/srv/glusterfs/bricks/DATA106/data
> Brick21: gluster01:/srv/glusterfs/bricks/DATA107/data
> Brick22: gluster01:/srv/glusterfs/bricks/DATA108/data
> Brick23: gluster01:/srv/glusterfs/bricks/DATA109/data
> Options Reconfigured:
> nfs.addr-namelookup: off
> transport.address-family: inet
> nfs.disable: on
> diagnostics.brick-log-level: ERROR
> performance.readdir-ahead: on
> auth.allow: $IP RANGE
> features.quota: on
> features.inode-quota: on
> features.quota-deem-statfs: on

We had a scheduled reboot yesterday.

Kind regards

Gudrun Amedick

On Wednesday, 4 April 2018 at 01:33 -0400, Serg Gulko wrote:
> Right now the volume is running with:
>
> readdir-optimize off
> parallel-readdir off
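>
> For reference, this is roughly how I set and checked them (volume name is a placeholder and the option names are from memory, so treat it as a sketch):
>
> gluster volume set <volname> cluster.readdir-optimize off
> gluster volume set <volname> performance.parallel-readdir off
> gluster volume get <volname> all | grep readdir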
>
> On Wed, Apr 4, 2018 at 1:29 AM, Nithya Balachandran <nbalacha@redhat.com> wrote:
> > Hi Serg,
> >
> > Do you mean that turning off readdir-optimize did not work? Or did you mean turning off parallel-readdir did not work?
> >
> > On 4 April 2018 at 10:48, Serg Gulko <s.gulko@gmail.com> wrote:
> > > Hello!
> > >
> > > Unfortunately, no.
> > > The directory is still not listed by ls -la, but I can cd into it.
> > > I can rename it and it becomes available; when I rename it back to the original name, it disappears again.
> > >
> > > On Wed, Apr 4, 2018 at 12:56 AM, Raghavendra Gowdappa <rgowdapp@redhat.com> wrote:
> > > >
> > > > On Wed, Apr 4, 2018 at 4:13 AM, Serg Gulko <s.gulko@gmail.com> wrote:
> > > > > Hello!
> > > > >
> > > > > We are running a distributed volume that contains 7 bricks.
> > > > > The volume is mounted using the native fuse client.
> > > > >
> > > > > After an unexpected system reboot, some files disappeared from the fuse mount point but are still available on the bricks.
> > > > >
> > > > > The way they disappeared confuses me a lot. I can't see certain directories using ls -la but, at the same time, I can cd into the missing directories. I can rename an invisible directory and it becomes accessible. When I rename it back to the original name, it becomes invisible again.
> > > > >
> > > > > I also tried to mount the same volume at another location and run ls, hoping that self-heal would fix the problem. Unfortunately, it did not.
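> > > > >
> > > > > Roughly what I did (server, volume name and mount point below are placeholders):
> > > > >
> > > > > mount -t glusterfs <server>:/<volname> /mnt/test
> > > > > ls -laR /mnt/test > /dev/null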
> > > > >
> > > > > Is there a way to bring our storage back to normal?
> > > > >
> > > > Can you check whether turning off the option performance.readdir-ahead helps?
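> > > >
> > > > Something like the following should do it (volume name is a placeholder):
> > > >
> > > > gluster volume set <volname> performance.readdir-ahead off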
> > > >
> > > > >
> > > > > glusterfs 3.8.8 built on Jan 11 2017 16:33:17
> > > > >
> > > > > Serg Gulko
> > > > >
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users