[Gluster-users] file metadata operations performance - gluster 4.1
Amar Tumballi
atumball at redhat.com
Tue Aug 28 06:40:33 UTC 2018
One observation we have made with git-clone-like workloads is that nl-cache
(the negative-lookup cache) helps here.
Try 'gluster volume set $volume-name nl-cache enable'.
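As a dry-run sketch, the command above (plus the nl-cache tunables that commonly accompany it) can be printed before applying. The volume name "myvol", the timeout/limit option names, and their values are assumptions here, not from this thread; check `gluster volume set help` on your build before using them.

```shell
# Dry-run: print the nl-cache related "volume set" commands instead of
# executing them. VOL and the tunable values are hypothetical examples.
VOL=myvol
for opt in "nl-cache enable" \
           "performance.nl-cache-timeout 600" \
           "performance.nl-cache-limit 10MB"; do
  echo "gluster volume set $VOL $opt"
done
```

Dropping the `echo` would apply the settings for real on a host with the gluster CLI installed.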
Sharing the 'profile info' output captured while you observe this performance
would also help us narrow down the situation.
More on how to capture profile info @
https://hackmd.io/PhhT5jPdQIKxzfeLQmnjJQ?view
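The capture flow described at that link boils down to starting profiling, running the workload, then dumping the counters. A minimal dry-run sketch (the volume name "myvol" is a placeholder; the `gluster volume profile` start/info/stop subcommands are the standard CLI):

```shell
# Dry-run: print the profiling commands in order. Replace "myvol" with
# your volume name and run the slow workload between start and info.
VOL=myvol
echo "gluster volume profile $VOL start"
echo "# ... run the git clone / untar workload here ..."
echo "gluster volume profile $VOL info"
echo "gluster volume profile $VOL stop"
```

Redirect the `info` output to a file so it can be attached to a reply.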
-Amar
On Thu, Aug 23, 2018 at 7:11 PM, Davide Obbi <davide.obbi at booking.com>
wrote:
> Hello,
>
> did anyone ever manage to achieve reasonable waiting times while
> performing metadata-intensive operations such as git clone, untar, etc.?
> Is this a feasible workload, or will it never be in scope for glusterfs?
>
> I'd like to know, if possible, which options affect performance for such a
> volume.
> Although I managed to get git status and git grep down to decent times (3
> and 30 seconds respectively), git clone and untarring a file from/to the
> same share take ages, for a git repo of approx. 6GB.
>
> I'm running a test environment with a 3-way replica: 128GB RAM, 24 cores
> at 2.40GHz, one internal SSD dedicated to the volume brick, and a 10Gb
> network.
>
> The options set so far that affect volume performance are:
> performance.readdir-ahead: on
> features.cache-invalidation-timeout: 600
> features.cache-invalidation: on
> performance.md-cache-timeout: 600
> performance.stat-prefetch: on
> performance.cache-invalidation: on
> performance.parallel-readdir: on
> network.inode-lru-limit: 900000
> performance.io-thread-count: 32
> performance.cache-size: 10GB
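The tuning above can be reproduced on another volume with a short loop. A dry-run sketch that only prints the `gluster volume set` commands; the volume name "myvol" is a placeholder, and the option/value pairs are exactly the ones listed in the quote:

```shell
# Dry-run: print one "volume set" command per option from the list above.
# Remove the "echo" to actually apply them on a gluster host.
VOL=myvol
while read -r opt val; do
  echo "gluster volume set $VOL $opt $val"
done <<'EOF'
performance.readdir-ahead on
features.cache-invalidation-timeout 600
features.cache-invalidation on
performance.md-cache-timeout 600
performance.stat-prefetch on
performance.cache-invalidation on
performance.parallel-readdir on
network.inode-lru-limit 900000
performance.io-thread-count 32
performance.cache-size 10GB
EOF
```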
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
--
Amar Tumballi (amarts)