[Gluster-devel] following up on the work underway for improvements in ls-l - looking for the data on the test runs

sankarshan sankarshan at kadalu.io
Wed Feb 26 03:16:44 UTC 2020


What is the configuration/sizing on which these tests are conducted?
Do you need any additional help from others on the patches which you
have used for the tests?

On Tue, 25 Feb 2020 at 13:32, Mohit Agrawal <moagrawa at redhat.com> wrote:
>
> With these 2 changes, we are getting a good improvement in file creation and
> a slight improvement in the "ls -l" operation.
>
> We are still working to improve the same.
>
> To validate this, we executed the script below from 6 different clients on a 24x3
> distributed-replicate volume after enabling the performance-related options:
>
> mkdir /gluster-mount/`hostname`
> date;
> for i in {1..100}
> do
>     echo "directory $i is created" `date`
>     mkdir /gluster-mount/`hostname`/dir$i
>     tar -xvf /root/kernel_src/linux-5.4-rc8.tar.xz -C /gluster-mount/`hostname`/dir$i >/dev/null
> done
>
> Without the patch:
> tar took almost 36-37 hours
>
> With the patch:
> tar takes almost 26 hours
>
> We saw a similar kind of improvement with the smallfile tool as well.
>
> On Tue, Feb 25, 2020 at 1:29 PM Mohit Agrawal <moagrawa at redhat.com> wrote:
>>
>> Hi,
>> We observed that performance is mainly hurt when .glusterfs holds a huge amount of data. Before executing a fop, the POSIX xlator builds an internal path based on the GFID. To validate that path it calls the (l)stat system call, and while .glusterfs is heavily loaded the kernel takes longer to look up the inode, so performance drops.
>> To improve this we tried two things with this patch (https://review.gluster.org/#/c/glusterfs/+/23783/):
>>
>> 1) Keep the first-level entries always in cache so that inode lookup is faster: at brick start, the brick process opens and holds fds for all first-level directories under .glusterfs (00 to ff, 256 in total) per brick. Even during cache cleanup the kernel will not evict these first-level entries, so performance improves.
>>
>> 2) Use the "at"-based calls (fstatat, readlinkat, etc.) with a path relative to an already-open directory fd instead of accessing the complete path; these calls were also helpful in improving performance.
>>
>> Regards,
>> Mohit Agrawal
>>
>>


-- 
sankarshan at kadalu.io | TZ: UTC+0530
kadalu.io : Making it easy to provision storage in k8s!

