[Gluster-users] [External] Re: file metadata operations performance - gluster 4.1

Raghavendra Gowdappa rgowdapp at redhat.com
Thu Aug 30 15:00:51 UTC 2018


On Thu, Aug 30, 2018 at 8:08 PM, Davide Obbi <davide.obbi at booking.com>
wrote:

> Thanks Amar,
>
> I have enabled the negative-lookup cache on the volume.
>
> Deflating an uncompressed 1.3GB tar archive now takes approximately 9
> minutes, a slight improvement over the previous 12-15, but still not fast
> enough compared to local disk. The tar file sits on the gluster
> share/volume and is deflated into the same folder structure.
>
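> (For reference, a minimal sketch of how such a timing can be taken; the
> mount point and archive name are just examples:)
>
>   cd /mnt/glustervol/test   # directory on the gluster mount
>   time tar xf archive.tar   # uncompressed archive, extracted in place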

I am assuming this is with parallel-readdir enabled, right?
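
A quick way to confirm from the CLI (the volume name is just an example):

  gluster volume get glustervol performance.parallel-readdir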


> Running the operation twice (without removing the already deflated files)
> also did not reduce the time spent.
>
> Running the operation with the tar archive on local disk made no
> difference either.
>
> What really made a huge difference during git clone was setting
> "performance.parallel-readdir on". During the "Receiving objects" phase,
> enabling the xlator bumped throughput from 3-4MB/s to 27MB/s.
>

What is the distribute count? Is it 1x3 replica?
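
You can read this off the "Number of Bricks" line, e.g. "1 x 3 = 3"
(the volume name below is just an example):

  gluster volume info glustervol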


> So, in conclusion, I'm trying to bring the untar operation to an
> acceptable level: not expecting local-disk speed, but at least getting
> within 4 minutes.
>
> I have attached the profiles collected at the end of the untar operations,
> with the archive on the mount and outside it.
>
> thanks
> Davide
>
>
> On Tue, Aug 28, 2018 at 8:41 AM Amar Tumballi <atumball at redhat.com> wrote:
>
>> One observation we had with git-clone-like workloads was that nl-cache
>> (negative-lookup cache) helps here.
>>
>> Try 'gluster volume set $volume-name nl-cache enable'.
>>
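>> For reference, nl-cache generally works together with upcall-based cache
>> invalidation; a minimal sketch of the related options (the timeout values
>> here are only examples, tune for your workload):
>>
>>   gluster volume set $volume-name performance.nl-cache on
>>   gluster volume set $volume-name performance.nl-cache-timeout 600
>>   gluster volume set $volume-name features.cache-invalidation on
>>   gluster volume set $volume-name features.cache-invalidation-timeout 600
>>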
>> Sharing the 'profile info' captured during these performance observations
>> would also help us narrow down the situation.
>>
>> More on how to capture profile info:
>> https://hackmd.io/PhhT5jPdQIKxzfeLQmnjJQ?view
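>>
>> Roughly (the output file name is just an example):
>>
>>   gluster volume profile $volume-name start
>>   # ... run the workload (untar / git clone) ...
>>   gluster volume profile $volume-name info > profile-untar.txt
>>   gluster volume profile $volume-name stop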
>>
>> -Amar
>>
>>
>> On Thu, Aug 23, 2018 at 7:11 PM, Davide Obbi <davide.obbi at booking.com>
>> wrote:
>>
>>> Hello,
>>>
>>> Did anyone ever manage to achieve reasonable wait times while performing
>>> metadata-intensive operations such as git clone, untar, etc.? Is this a
>>> feasible workload, or will it never be in scope for glusterfs?
>>>
>>> I'd like to know, if possible, which options affect performance for this
>>> kind of volume. Although I managed to get git status and git grep down
>>> to decent times (3 and 30 seconds respectively), git clone and untarring
>>> a file from/to the same share take ages, for a git repo of approximately
>>> 6GB.
>>>
>>> I'm running a test environment with a 3-way replica: 128GB RAM, 24 cores
>>> at 2.40GHz, one internal SSD dedicated to the volume brick, and a 10Gb
>>> network.
>>>
>>> The options set so far that affect volume performance are:
>>>  performance.readdir-ahead: on
>>>  features.cache-invalidation-timeout: 600
>>>  features.cache-invalidation: on
>>>  performance.md-cache-timeout: 600
>>>  performance.stat-prefetch: on
>>>  performance.cache-invalidation: on
>>>  performance.parallel-readdir: on
>>>  network.inode-lru-limit: 900000
>>>  performance.io-thread-count: 32
>>>  performance.cache-size: 10GB
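>>>
>>> (A quick way to double-check what is actually in effect on the volume;
>>> "testvol" below is a placeholder name:)
>>>
>>>   gluster volume get testvol all | grep -E 'readdir|cache|io-thread|inode-lru'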
>>>
>>
>>
>>
>> --
>> Amar Tumballi (amarts)
>>
>
>
> --
> Davide Obbi
> System Administrator
>
> Booking.com B.V.
> Vijzelstraat 66-80 Amsterdam 1017HL Netherlands
> Direct +31207031558
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>

