[Gluster-devel] Any mature (better) solution (way) to handle slow performance on 'ls -l'
pgurusid at redhat.com
Thu Jun 7 10:59:38 UTC 2018
If you are not using applications that rely on 100% metadata consistency,
like databases, Kafka, AMQ etc., you can set the volume options mentioned
below:
# gluster volume set <volname> group metadata-cache
# gluster volume set <volname> network.inode-lru-limit 200000
# gluster volume set <volname> performance.readdir-ahead on
# gluster volume set <volname> performance.parallel-readdir on
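To confirm the options took effect, you can read them back with 'gluster
volume get' (a quick sketch; the grep pattern is just illustrative):

# gluster volume get <volname> all | grep -E 'md-cache|cache-invalidation|readdir|inode-lru'

The metadata-cache group should have enabled the md-cache and
cache-invalidation related options, and the last two commands above enable
the readdir optimizations.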
For more information, refer to the Gluster documentation.
Also, which version of Gluster are you using? It is preferred to use 3.11
or above for these performance enhancements.
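If you are unsure, running 'gluster --version' on any node prints the
installed version:

# gluster --version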
Note that parallel-readdir should increase the 'ls -l' performance
drastically in your case, but there are a few known corner-case issues.
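For a rough before/after comparison, timing a listing on a client mount is
usually enough (the mount path below is just an example; run it twice to
see cold-cache vs. warm-cache behaviour):

# time ls -l /mnt/gv0/somedir > /dev/null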
On Wed, May 30, 2018, 8:29 PM Yanfei Wang <backyes at gmail.com> wrote:
> Hi experts on GlusterFS,
> In our testbed, we found that 'ls -l' performance is pretty slow.
> Indeed, from the perspective of the GlusterFS design space, we should
> avoid running 'ls' on a directory, which, to our current knowledge,
> traverses all bricks sequentially.
> We use generic setting for our testbed:
> Volume Name: gv0
> Type: Distributed-Replicate
> Volume ID: 4a6f96f8-b3fb-4550-bd19-e1a5dffad4d0
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 19 x 3 = 57
> Transport-type: tcp
> Options Reconfigured:
> features.inode-quota: off
> features.quota: off
> cluster.quorum-reads: on
> cluster.quorum-count: 2
> cluster.quorum-type: fixed
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: off
> cluster.server-quorum-ratio: 51%
> After carefully consulting the docs, the NFS client appears to be the
> preferred client solution for better 'ls' performance. However, this
> better performance comes from caching metadata locally, I think, and
> that caching mechanism will incur a data-coherence penalty, right?
> I want to know the best or most mature way to trade off 'ls'
> performance against data coherence in reality. Any comments are
> appreciated.
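To the trade-off question above: on the native client that trade-off is
tunable. The metadata-cache group caches metadata for a fixed timeout (600
seconds, if I remember the group file correctly), and with
cache-invalidation enabled the server sends upcall notifications that
invalidate stale client-side entries, which bounds the coherence penalty.
If you need tighter coherence, you can lower the timeout at the cost of
some 'ls' performance, e.g.:

# gluster volume set <volname> performance.md-cache-timeout 60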