<div dir="ltr">This sounds like it may be a different issue. Can you file a bug for this ([1]) and provide all the logs/information you have on this (dir name, files on bricks, mount logs etc)?<div><br></div><div>Thanks,</div><div>Nithya</div><div><br></div><div>[1] <a href="https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS">https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS</a></div></div><div class="gmail_extra"><br><div class="gmail_quote">On 4 April 2018 at 19:03, Gudrun Mareike Amedick <span dir="ltr"><<a href="mailto:g.amedick@uni-luebeck.de" target="_blank">g.amedick@uni-luebeck.de</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi,<br>

I'm currently seeing the same behaviour.

Today, one of my users tried to delete a folder. It failed, saying the directory wasn't empty. ls -lah showed an empty folder, but on the bricks I found some files. Renaming the directory caused it to reappear.
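
In case it helps with a bug report, this is the kind of check I'd run on each brick to compare the directory's extended attributes (the brick and directory paths below are placeholders for the real ones):

  getfattr -d -m . -e hex /srv/glusterfs/bricks/DATA101/data/path/to/dir

As far as I understand, on a distribute volume the directory should exist on every brick with the same trusted.gfid, and each brick should carry a trusted.glusterfs.dht layout entry; a mismatch or missing entry there would be worth including in the report.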

We're running Gluster 3.12.7-1 on Debian 9 from the repositories provided by gluster.org, upgraded from 3.8 a while ago. The volume is mounted via the fuse client. Our settings are:
> gluster volume info $VOLUMENAME
>
> Volume Name: $VOLUMENAME
> Type: Distribute
> Volume ID: 0d210c70-e44f-46f1-862c-ef260514c9f1
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 23
> Transport-type: tcp
> Bricks:
> Brick1: gluster02:/srv/glusterfs/bricks/DATA201/data
> Brick2: gluster02:/srv/glusterfs/bricks/DATA202/data
> Brick3: gluster02:/srv/glusterfs/bricks/DATA203/data
> Brick4: gluster02:/srv/glusterfs/bricks/DATA204/data
> Brick5: gluster02:/srv/glusterfs/bricks/DATA205/data
> Brick6: gluster02:/srv/glusterfs/bricks/DATA206/data
> Brick7: gluster02:/srv/glusterfs/bricks/DATA207/data
> Brick8: gluster02:/srv/glusterfs/bricks/DATA208/data
> Brick9: gluster01:/srv/glusterfs/bricks/DATA110/data
> Brick10: gluster01:/srv/glusterfs/bricks/DATA111/data
> Brick11: gluster01:/srv/glusterfs/bricks/DATA112/data
> Brick12: gluster01:/srv/glusterfs/bricks/DATA113/data
> Brick13: gluster01:/srv/glusterfs/bricks/DATA114/data
> Brick14: gluster02:/srv/glusterfs/bricks/DATA209/data
> Brick15: gluster01:/srv/glusterfs/bricks/DATA101/data
> Brick16: gluster01:/srv/glusterfs/bricks/DATA102/data
> Brick17: gluster01:/srv/glusterfs/bricks/DATA103/data
> Brick18: gluster01:/srv/glusterfs/bricks/DATA104/data
> Brick19: gluster01:/srv/glusterfs/bricks/DATA105/data
> Brick20: gluster01:/srv/glusterfs/bricks/DATA106/data
> Brick21: gluster01:/srv/glusterfs/bricks/DATA107/data
> Brick22: gluster01:/srv/glusterfs/bricks/DATA108/data
> Brick23: gluster01:/srv/glusterfs/bricks/DATA109/data
> Options Reconfigured:
> nfs.addr-namelookup: off
> transport.address-family: inet
> nfs.disable: on
> diagnostics.brick-log-level: ERROR
> performance.readdir-ahead: on
> auth.allow: $IP RANGE
> features.quota: on
> features.inode-quota: on
> features.quota-deem-statfs: on
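
(For reference, the effective values of the readdir-related options discussed further down in this thread can be checked with something like:

  gluster volume get $VOLUMENAME all | grep readdir

Note that performance.readdir-ahead is explicitly on in our configuration.)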

We had a scheduled reboot yesterday.

Kind regards

Gudrun Amedick


On Wednesday, 04.04.2018, at 01:33 -0400, Serg Gulko wrote:
> Right now the volume is running with:
>
> readdir-optimize off
> parallel-readdir off
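>
> (For reference, the corresponding set commands look like this, with <volname> as a placeholder for the volume name:
>
>   gluster volume set <volname> cluster.readdir-optimize off
>   gluster volume set <volname> performance.parallel-readdir off)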
>
> On Wed, Apr 4, 2018 at 1:29 AM, Nithya Balachandran <nbalacha@redhat.com> wrote:
> > Hi Serg,
> >
> > Do you mean that turning off readdir-optimize did not work? Or did you mean that turning off parallel-readdir did not work?
> >
> > On 4 April 2018 at 10:48, Serg Gulko <s.gulko@gmail.com> wrote:
> > > Hello!
> > >
> > > Unfortunately no.
> > > The directory is still not listed by ls -la, but I can cd into it.
> > > I can rename it and it becomes available; when I rename it back to the original name, it disappears again.
> > >
> > > On Wed, Apr 4, 2018 at 12:56 AM, Raghavendra Gowdappa <rgowdapp@redhat.com> wrote:
> > > >
> > > > On Wed, Apr 4, 2018 at 4:13 AM, Serg Gulko <s.gulko@gmail.com> wrote:
> > > > > Hello!
> > > > >
> > > > > We are running a distributed volume that contains 7 bricks.
> > > > > The volume is mounted using the native fuse client.
> > > > >
> > > > > After an unexpected system reboot, some files disappeared from the fuse mount point but are still available on the bricks.
> > > > >
> > > > > The way they disappeared confuses me a lot. I can't see certain directories using ls -la but, at the same time, I can cd into the missing
> > > > > directories. I can rename an invisible directory and it becomes accessible; when I rename it back to the original name, it becomes
> > > > > invisible again.
> > > > >
> > > > > I also tried to mount the same volume at another location and run ls, hoping that self-heal would fix the problem. Unfortunately, it did
> > > > > not.
> > > > >
> > > > > Is there a way to bring our storage back to normal?
> > > > >
> > > > Can you check whether turning off the option performance.readdir-ahead helps?
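> > > > For example, with <volname> as a placeholder for your volume name:
> > > >
> > > >   gluster volume set <volname> performance.readdir-ahead off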
> > > >
> > > > > glusterfs 3.8.8 built on Jan 11 2017 16:33:17
> > > > >
> > > > > Serg Gulko