[Gluster-users] Invisible files and directories

Gudrun Mareike Amedick g.amedick at uni-luebeck.de
Wed Apr 4 13:33:34 UTC 2018


Hi,

I'm currently facing the same behaviour. 

Today, one of my users tried to delete a folder. It failed, saying the directory wasn't empty. ls -lah showed an empty folder, but on the bricks I
found some files. Renaming the directory made its content reappear.
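
To see the mismatch, a comparison along these lines can be used (mount point and directory are placeholders; the brick path is one of ours):

  ls -lah <mountpoint>/<dir>                        # fuse mount: appears empty
  ls -lah /srv/glusterfs/bricks/DATA205/data/<dir>  # brick: files are present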

We're running gluster 3.12.7-1 on Debian 9 from the repositories provided by gluster.org, upgraded from 3.8 a while ago. The volume is mounted via the
fuse client. Our settings are:
> gluster volume info $VOLUMENAME
>  
> Volume Name: $VOLUMENAME
> Type: Distribute
> Volume ID: 0d210c70-e44f-46f1-862c-ef260514c9f1
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 23
> Transport-type: tcp
> Bricks:
> Brick1: gluster02:/srv/glusterfs/bricks/DATA201/data
> Brick2: gluster02:/srv/glusterfs/bricks/DATA202/data
> Brick3: gluster02:/srv/glusterfs/bricks/DATA203/data
> Brick4: gluster02:/srv/glusterfs/bricks/DATA204/data
> Brick5: gluster02:/srv/glusterfs/bricks/DATA205/data
> Brick6: gluster02:/srv/glusterfs/bricks/DATA206/data
> Brick7: gluster02:/srv/glusterfs/bricks/DATA207/data
> Brick8: gluster02:/srv/glusterfs/bricks/DATA208/data
> Brick9: gluster01:/srv/glusterfs/bricks/DATA110/data
> Brick10: gluster01:/srv/glusterfs/bricks/DATA111/data
> Brick11: gluster01:/srv/glusterfs/bricks/DATA112/data
> Brick12: gluster01:/srv/glusterfs/bricks/DATA113/data
> Brick13: gluster01:/srv/glusterfs/bricks/DATA114/data
> Brick14: gluster02:/srv/glusterfs/bricks/DATA209/data
> Brick15: gluster01:/srv/glusterfs/bricks/DATA101/data
> Brick16: gluster01:/srv/glusterfs/bricks/DATA102/data
> Brick17: gluster01:/srv/glusterfs/bricks/DATA103/data
> Brick18: gluster01:/srv/glusterfs/bricks/DATA104/data
> Brick19: gluster01:/srv/glusterfs/bricks/DATA105/data
> Brick20: gluster01:/srv/glusterfs/bricks/DATA106/data
> Brick21: gluster01:/srv/glusterfs/bricks/DATA107/data
> Brick22: gluster01:/srv/glusterfs/bricks/DATA108/data
> Brick23: gluster01:/srv/glusterfs/bricks/DATA109/data
> Options Reconfigured:
> nfs.addr-namelookup: off
> transport.address-family: inet
> nfs.disable: on
> diagnostics.brick-log-level: ERROR
> performance.readdir-ahead: on
> auth.allow: $IP RANGE
> features.quota: on
> features.inode-quota: on
> features.quota-deem-statfs: on
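
For the readdir-related options discussed further down in this thread, the current values can be checked with something like:

  gluster volume get $VOLUMENAME cluster.readdir-optimize
  gluster volume get $VOLUMENAME performance.parallel-readdir
  gluster volume get $VOLUMENAME performance.readdir-ahead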

We had a scheduled reboot yesterday.

Kind regards

Gudrun Amedick


On Wednesday, 04.04.2018 at 01:33 -0400, Serg Gulko wrote:
> Right now the volume is running with
> 
> readdir-optimize off
> parallel-readdir off
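> 
> Both were turned off with something along these lines (volume name omitted):
> 
>   gluster volume set <volname> cluster.readdir-optimize off
>   gluster volume set <volname> performance.parallel-readdir off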
> 
> On Wed, Apr 4, 2018 at 1:29 AM, Nithya Balachandran <nbalacha at redhat.com> wrote:
> > Hi Serg,
> > 
> > Do you mean that turning off readdir-optimize did not work? Or did you mean turning off parallel-readdir did not work?
> > 
> > 
> > 
> > On 4 April 2018 at 10:48, Serg Gulko <s.gulko at gmail.com> wrote:
> > > Hello! 
> > > 
> > > Unfortunately no. 
> > > The directory is still not listed by ls -la, but I can cd into it.
> > > I can rename it and it becomes visible; when I rename it back to the original name, it disappears again. 
> > > 
> > > On Wed, Apr 4, 2018 at 12:56 AM, Raghavendra Gowdappa <rgowdapp at redhat.com> wrote:
> > > > 
> > > > 
> > > > On Wed, Apr 4, 2018 at 4:13 AM, Serg Gulko <s.gulko at gmail.com> wrote:
> > > > > Hello! 
> > > > > 
> > > > > We are running distributed volume that contains 7 bricks. 
> > > > > Volume is mounted using native fuse client. 
> > > > > 
> > > > > After an unexpected system reboot, some files disappeared from the fuse mount point but are still available on the bricks. 
> > > > > 
> > > > > The way they disappeared confuses me a lot. I can't see certain directories using ls -la but, at the same time, I can cd into the missing
> > > > > directory. I can rename the invisible directory and it becomes accessible. When I rename it back to the original name, it becomes
> > > > > invisible again. 
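> > > > > 
> > > > > For example (paths and names are made up):
> > > > > 
> > > > >   ls -la /mnt/vol | grep somedir          # not listed
> > > > >   cd /mnt/vol/somedir                     # works anyway
> > > > >   mv /mnt/vol/somedir /mnt/vol/somedir2   # now visible in ls
> > > > >   mv /mnt/vol/somedir2 /mnt/vol/somedir   # invisible again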
> > > > > 
> > > > > I also tried to mount the same volume at another location and ran ls, hoping that self-heal would fix the problem. Unfortunately, it did
> > > > > not. 
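> > > > > 
> > > > > Roughly what I did (server and volume names are placeholders):
> > > > > 
> > > > >   mount -t glusterfs <server>:/<volname> /mnt/test
> > > > >   ls -lR /mnt/test > /dev/null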
> > > > > 
> > > > > Is there a way to bring our storage to normal?
> > > > > 
> > > > Can you check whether turning off option performance.readdir-ahead helps?
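> > > > 
> > > > Something like this should do it (substitute your volume name):
> > > > 
> > > >   gluster volume set <volname> performance.readdir-ahead off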
> > > > 
> > > > > 
> > > > > glusterfs 3.8.8 built on Jan 11 2017 16:33:17
> > > > > 
> > > > > Serg Gulko 
> > > > > 
> > > > > 
> > > > 
> > > 
> > 