[Gluster-users] parallel-readdir prevents directories and files listing - Bug 1670382

João Baúto joao.bauto at neuro.fchampalimaud.org
Thu May 2 09:54:44 UTC 2019


Thanks for the reply, Amar.

Last I knew, we recommended avoiding fuse and samba shares on the same
> volume (mainly because we couldn't spend much effort on testing that
> configuration).


Does this also apply to Samba shares when using the vfs_glusterfs module?
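
For context, with the vfs_glusterfs module Samba talks to the volume over
libgfapi instead of re-exporting a local fuse mount. A minimal sketch of how
such a share is defined in smb.conf (share name and log path here are
illustrative, not our exact config):

    [tank]
        path = /
        read only = no
        kernel share modes = no
        vfs objects = glusterfs
        glusterfs:volume = tank
        glusterfs:logfile = /var/log/samba/glusterfs-tank.log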

Anyway, we would treat the behavior as a bug for sure. One possible path,
> looking at the volume info below, is to disable the 'stat-prefetch' option
> and see if it helps. The next option I would try is disabling readdir-ahead.


I'll try both options and report back.
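
For the record, I'm assuming the two tests map to the usual volume-set
commands (volume name tank, per the info below):

    gluster volume set tank performance.stat-prefetch off
    gluster volume set tank performance.readdir-ahead off

If I remember right, parallel-readdir depends on readdir-ahead, so the second
test probably also means setting performance.parallel-readdir off first.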

Thanks,
João

Amar Tumballi Suryanarayan <atumball at redhat.com> wrote on Wednesday,
1/05/2019 at 13:30:

>
>
> On Mon, Apr 29, 2019 at 3:56 PM João Baúto <
> joao.bauto at neuro.fchampalimaud.org> wrote:
>
>> Hi,
>>
>> I have an 8-brick distributed volume that Windows and Linux clients
>> mount via Samba and that headless compute servers mount via the Gluster
>> native fuse client. With parallel-readdir on, if a Windows client creates
>> a new folder, the folder is indeed created but remains invisible to the
>> Windows client. Accessing the same Samba share from a Linux client, the
>> folder is visible and behaves normally. The same folder is also visible
>> when mounting via Gluster native fuse.
>>
>> The Windows client can still list and rename existing directories, and
>> for files everything seems to work fine.
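>>
>> (For reference, the fuse mounts on the compute servers are plain native
>> glusterfs mounts, along the lines of:
>>
>>     mount -t glusterfs swp-gluster-01:/tank /mnt/tank
>>
>> where /mnt/tank stands in for the actual mount point.)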
>>
>> Gluster servers: CentOS 7.5 with Gluster 5.3 and Samba 4.8.3-4.el7.0.1
>> from @fasttrack
>> Clients tested: Windows 10, Ubuntu 18.10, CentOS 7.5
>>
>> https://bugzilla.redhat.com/show_bug.cgi?id=1670382
>>
>
> Thanks for the bug report. We will look into this and get back to you.
>
> Last I knew, we recommended avoiding fuse and samba shares on the same
> volume (mainly because we couldn't spend much effort on testing that
> configuration).
> Anyway, we would treat the behavior as a bug for sure. One possible path,
> looking at the volume info below, is to disable the 'stat-prefetch' option
> and see if it helps. The next option I would try is disabling readdir-ahead.
>
> Regards,
> Amar
>
>
>>
>> Volume Name: tank
>> Type: Distribute
>> Volume ID: 9582685f-07fa-41fd-b9fc-ebab3a6989cf
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 8
>> Transport-type: tcp
>> Bricks:
>> Brick1: swp-gluster-01:/tank/volume1/brick
>> Brick2: swp-gluster-02:/tank/volume1/brick
>> Brick3: swp-gluster-03:/tank/volume1/brick
>> Brick4: swp-gluster-04:/tank/volume1/brick
>> Brick5: swp-gluster-01:/tank/volume2/brick
>> Brick6: swp-gluster-02:/tank/volume2/brick
>> Brick7: swp-gluster-03:/tank/volume2/brick
>> Brick8: swp-gluster-04:/tank/volume2/brick
>> Options Reconfigured:
>> performance.parallel-readdir: on
>> performance.readdir-ahead: on
>> performance.cache-invalidation: on
>> performance.md-cache-timeout: 600
>> storage.batch-fsync-delay-usec: 0
>> performance.write-behind-window-size: 32MB
>> performance.stat-prefetch: on
>> performance.read-ahead: on
>> performance.read-ahead-page-count: 16
>> performance.rda-request-size: 131072
>> performance.quick-read: on
>> performance.open-behind: on
>> performance.nl-cache-timeout: 600
>> performance.nl-cache: on
>> performance.io-thread-count: 64
>> performance.io-cache: off
>> performance.flush-behind: on
>> performance.client-io-threads: off
>> performance.write-behind: off
>> performance.cache-samba-metadata: on
>> network.inode-lru-limit: 0
>> features.cache-invalidation-timeout: 600
>> features.cache-invalidation: on
>> cluster.readdir-optimize: on
>> cluster.lookup-optimize: on
>> client.event-threads: 4
>> server.event-threads: 16
>> features.quota-deem-statfs: on
>> nfs.disable: on
>> features.quota: on
>> features.inode-quota: on
>> cluster.enable-shared-storage: disable
>>
>> Cheers,
>>
>> João Baúto
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>
> --
> Amar Tumballi (amarts)
>