[Bugs] [Bug 1670382] parallel-readdir prevents directories and files listing

bugzilla at redhat.com bugzilla at redhat.com
Thu Apr 4 08:48:42 UTC 2019


https://bugzilla.redhat.com/show_bug.cgi?id=1670382

joao.bauto at neuro.fchampalimaud.org changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |joao.bauto at neuro.fchampalimaud.org



--- Comment #9 from joao.bauto at neuro.fchampalimaud.org ---
So I think I'm hitting this bug as well.

I have an 8-brick distributed volume that Windows and Linux clients mount via
Samba and that headless compute servers mount with the GlusterFS native FUSE
client. With parallel-readdir on, if a Windows client creates a new folder, the
folder is indeed created but remains invisible to that same Windows client.
Accessing the same Samba share from a Linux client, the folder is visible and
behaves normally. The same folder is also visible when mounted via the native
FUSE client.
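
To make the behaviour concrete, a minimal illustration (the drive letter and
mount point below are just examples, not my actual paths):

On the Windows client (Samba share mapped to drive Z:):

Z:\> mkdir newfolder
Z:\> dir            <- newfolder does not appear in the listing

On a Linux client mounting the same Samba share:

$ ls /mnt/tank      <- newfolder is listed and behaves normally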

The Windows client can still list existing directories and rename them, and for
files everything seems to work fine.

Gluster servers: CentOS 7.5 with Gluster 5.3 and Samba 4.8.3-4.el7.0.1 from
@fasttrack

Clients tested: Windows 10, Ubuntu 18.10, CentOS 7.5

Volume Name: tank
Type: Distribute
Volume ID: 9582685f-07fa-41fd-b9fc-ebab3a6989cf
Status: Started
Snapshot Count: 0
Number of Bricks: 8
Transport-type: tcp
Bricks:
Brick1: swp-gluster-01:/tank/volume1/brick
Brick2: swp-gluster-02:/tank/volume1/brick
Brick3: swp-gluster-03:/tank/volume1/brick
Brick4: swp-gluster-04:/tank/volume1/brick
Brick5: swp-gluster-01:/tank/volume2/brick
Brick6: swp-gluster-02:/tank/volume2/brick
Brick7: swp-gluster-03:/tank/volume2/brick
Brick8: swp-gluster-04:/tank/volume2/brick
Options Reconfigured:
performance.parallel-readdir: on
performance.readdir-ahead: on
performance.cache-invalidation: on
performance.md-cache-timeout: 600
storage.batch-fsync-delay-usec: 0
performance.write-behind-window-size: 32MB
performance.stat-prefetch: on
performance.read-ahead: on
performance.read-ahead-page-count: 16
performance.rda-request-size: 131072
performance.quick-read: on
performance.open-behind: on
performance.nl-cache-timeout: 600
performance.nl-cache: on
performance.io-thread-count: 64
performance.io-cache: off
performance.flush-behind: on
performance.client-io-threads: off
performance.write-behind: off
performance.cache-samba-metadata: on
network.inode-lru-limit: 0
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
cluster.readdir-optimize: on
cluster.lookup-optimize: on
client.event-threads: 4
server.event-threads: 16
features.quota-deem-statfs: on
nfs.disable: on
features.quota: on
features.inode-quota: on
cluster.enable-shared-storage: disable
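
In case it helps with triage, this is how I flip the suspect option to compare
behaviour (run on any of the gluster servers; tank is the volume above):

# Disable the suspected option, then re-test folder creation from Windows
gluster volume set tank performance.parallel-readdir off

# Re-enable it afterwards to restore the configuration shown above
gluster volume set tank performance.parallel-readdir on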

Cheers

-- 
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

