[Gluster-users] A Problem of readdir-optimize

Nithya Balachandran nbalacha at redhat.com
Fri Jan 5 02:41:42 UTC 2018


From Karan:

> We had a similar issue when we were certifying gluster + milestone. But
> the issue got resolved when we disabled readdir-ahead. Looks like the
> issue is in the readdir code path.


Paul, can you try turning off performance.readdir-ahead and see if the
issue persists?
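A sketch of how that test could be run (assuming the volume name "vol" from the output below, and a hypothetical client mount point /mnt/vol; run the volume commands on any server node). Note that performance.parallel-readdir is also enabled on this volume and builds on readdir-ahead, so it may need to be switched off as well:

```shell
# Check the current values first
gluster volume get vol performance.parallel-readdir
gluster volume get vol performance.readdir-ahead

# Disable parallel-readdir before readdir-ahead, since it depends on it
gluster volume set vol performance.parallel-readdir off
gluster volume set vol performance.readdir-ahead off

# Then repeat the reproduction from a client mount
ls /mnt/vol
ls /mnt/vol
```

If the empty first listing no longer occurs with these options off, that would support the theory that the bug is in the readdir-ahead path rather than in cluster.readdir-optimize itself.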



On 29 December 2017 at 19:14, Paul <flypen at gmail.com> wrote:

> Hi Nithya,
>
> GlusterFS version is 3.11.0, and we use the GlusterFS native client.
> Please see the below information.
>
> $gluster v info vol
>
> Volume Name: vol
> Type: Distributed-Replicate
> Volume ID: d59bd014-3b8b-411a-8587-ee36d254f755
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 90 x 2 = 180
> Transport-type: tcp,rdma
> Bricks:
> ...
>
> Options Reconfigured:
> performance.force-readdirp: false
> dht.force-readdirp: off
> performance.read-ahead: on
> performance.client-io-threads: on
> diagnostics.client-sys-log-level: CRITICAL
> cluster.entry-self-heal: on
> cluster.metadata-self-heal: on
> cluster.data-self-heal: on
> cluster.self-heal-daemon: enable
> performance.readdir-ahead: on
> diagnostics.client-log-level: INFO
> diagnostics.brick-log-level: INFO
> cluster.lookup-unhashed: on
> performance.parallel-readdir: on
> cluster.readdir-optimize: off
> performance.write-behind-window-size: 32MB
> performance.cache-refresh-timeout: 5
> features.inode-quota: off
> features.quota: off
> user.ftp.anon: NO
> user.vol.snapshot: enable
> user.nfsganesha: enable
> features.trash-internal-op: off
> features.trash: off
> diagnostics.stats-dump-interval: 10
> server.event-threads: 16
> client.event-threads: 8
> server.keepalive-count: 1
> server.keepalive-interval: 1
> server.keepalive-time: 2
> transport.keepalive: 1
> client.keepalive-count: 1
> client.keepalive-interval: 1
> client.keepalive-time: 2
> features.cache-invalidation: off
> network.ping-timeout: 30
> user.smb.guest: no
> user.id: 8148
> nfs.disable: on
> snap-activate-on-create: enable
>
> Thanks,
> Paul
>
> On Thu, Dec 28, 2017 at 11:25 PM, Nithya Balachandran <nbalacha at redhat.com
> > wrote:
>
>> Hi Paul,
>>
>> A few questions:
>> What type of volume is this and what client protocol are you using?
>> What version of Gluster are you using?
>>
>> Regards,
>> Nithya
>>
>> On 28 December 2017 at 20:09, Paul <flypen at gmail.com> wrote:
>>
>>> Hi, All,
>>>
>>> If I set cluster.readdir-optimize to on, the performance of "ls" is
>>> better, but I find one problem.
>>>
>>> # ls
>>> # ls
>>> files.1  files.2 file.3
>>>
>>> I run ls twice. The first time, ls returns nothing; the second time, it
>>> returns all the file names.
>>>
>>> If I turn off cluster.readdir-optimize, I don't see this problem.
>>>
>>> Is there a way to solve this problem, so that ls returns the correct
>>> file names?
>>>
>>> Thanks,
>>> Paul
>>>
>>> _______________________________________________
>>> Gluster-users mailing list
>>> Gluster-users at gluster.org
>>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>>
>>
>>
>