[Gluster-devel] Any mature(better) solution(way) to handle slow performance on 'ls -l, '.
Yanfei Wang
backyes at gmail.com
Mon May 28 09:37:44 UTC 2018
Hi experts on GlusterFS,
In our testbed, we found that 'ls -l' performance is quite slow.
From the perspective of the GlusterFS design space, we understand
that we need to avoid running 'ls' on a directory, since, to our
current knowledge, it traverses all bricks sequentially.
We use a generic configuration for our testbed:
```
Volume Name: gv0
Type: Distributed-Replicate
Volume ID: 4a6f96f8-b3fb-4550-bd19-e1a5dffad4d0
Status: Started
Snapshot Count: 0
Number of Bricks: 19 x 3 = 57
Transport-type: tcp
Bricks:
...
Options Reconfigured:
features.inode-quota: off
features.quota: off
cluster.quorum-reads: on
cluster.quorum-count: 2
cluster.quorum-type: fixed
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.server-quorum-ratio: 51%
```
Carefully consulting the docs, the NFS client appears to be the
preferred client for better 'ls' performance. However, I think this
improvement comes from caching metadata locally, and that caching
mechanism will come at the cost of data coherence, right?
I want to know the best or most mature way to trade off 'ls'
performance against data coherence in practice. Any comments are
welcome.
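One option that may be worth testing (a sketch, assuming a reasonably
recent GlusterFS release that supports the `metadata-cache` group
profile and upcall-based cache invalidation; verify availability with
your version) is to enable client-side metadata caching together with
cache invalidation, so cached stat/xattr data is invalidated when
another client modifies a file, rather than relying on timeouts alone:

```shell
# Enable the md-cache group profile on the volume (here: gv0).
# This turns on performance.stat-prefetch, performance.md-cache-timeout,
# features.cache-invalidation, and related options as a bundle.
gluster volume set gv0 group metadata-cache

# Optionally speed up directory listing itself; parallel-readdir
# issues readdir to subvolumes in parallel instead of sequentially.
gluster volume set gv0 performance.readdir-ahead on
gluster volume set gv0 performance.parallel-readdir on
```

With cache invalidation enabled, the server notifies clients when
cached metadata becomes stale, which should soften the coherence
penalty compared to a purely timeout-based cache such as the NFS
client's attribute cache. The exact option set applied by the group
profile varies by release, so check `gluster volume get gv0 all`
afterwards.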
Thanks.
-Fei