[Gluster-users] Very slow 'ls' ?

Diego Zuccato diego.zuccato at unibo.it
Fri Jan 15 13:09:38 UTC 2021


Hello all.

I have a volume configured as:
-8<--
root@str957-clustor00:~# gluster v info cluster_data


Volume Name: cluster_data
Type: Distributed-Replicate
Volume ID: a8caaa90-d161-45bb-a68c-278263a8531a
Status: Started
Snapshot Count: 0
Number of Bricks: 21 x (2 + 1) = 63
Transport-type: tcp
Bricks:
Brick1: clustor00:/srv/bricks/00/d
Brick2: clustor01:/srv/bricks/00/d
Brick3: clustor02:/srv/quorum/00/d (arbiter)
[...]
Brick61: clustor01:/srv/bricks/13/d
Brick62: clustor02:/srv/bricks/13/d
Brick63: clustor00:/srv/quorum/06/d (arbiter)
Options Reconfigured:
client.event-threads: 2
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
features.quota: on
features.inode-quota: on
features.quota-deem-statfs: on
features.default-soft-limit: 90
cluster.self-heal-daemon: enable
-8<--

The client connects to the servers over InfiniBand (40G on the client
side, 100G between storage nodes), using IPoIB (IIUC the RDMA transport
is deprecated and unmaintained).
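
In case it matters, this is roughly how I'd check the IPoIB side
(assuming the interface is called ib0 - adjust to the actual name):
-8<--
# IPoIB transport mode: "datagram" or "connected" (ib0 is just an example name)
cat /sys/class/net/ib0/mode
# MTU of the IPoIB interface (connected mode allows up to 65520)
ip link show ib0
-8<--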

A simple "ls -ln" ('n' to avoid delays due to lookups) for a folder with
just 7 entries takes more than 4s on the first run, ~1s on the next one
and a reasonable 0.1s on the third (if I'm fast enough).
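
Timings measured with something like this (mount point and path are just
placeholders for the actual ones):
-8<--
# run a few times in a row to see first-access vs. cached behaviour
time ls -ln /mnt/cluster_data/some/folder
-8<--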

I tried enabling client-io-threads, but it didn't seem to change anything.
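
(By "enabling" I mean the standard volume set/get commands, with the
volume name from above:
-8<--
gluster volume set cluster_data performance.client-io-threads on
# verify the currently effective value
gluster volume get cluster_data performance.client-io-threads
-8<--
)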

Any hints?

TIA!

-- 
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786

