[Gluster-users] ls performance on directories with small number of items

Aaron Roberts aroberts at domicilium.com
Wed Nov 29 10:11:01 UTC 2017


Thanks Sam/Julian,
               I understand, roughly, that readdir() and similar operations are simply hard to make fast on a distributed filesystem and that NFS can do this part faster.  I’d like to see if gluster can be tweaked a bit to get this working.

performance.stat-prefetch is set to ‘on’.

Would performance.md-cache-timeout help me?
It is set to 1 on my volume (the default).  Would raising this help with servicing a large number of hits for a single file/dir?
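
For what it’s worth, this is roughly how I was planning to try it – a minimal sketch, assuming the volume name web_vol1 from my vol info further down and assuming a 60 second timeout is acceptable for our consistency needs:

# raise the metadata cache timeout from the default of 1 second
gluster volume set web_vol1 performance.md-cache-timeout 60
# confirm it shows up under "Options Reconfigured"
gluster volume info web_vol1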

Thanks,
               Aaron

From: Joe Julian [mailto:joe at julianfamily.org]
Sent: 27 November 2017 23:45
To: gluster-users at gluster.org; Sam McLeod <mailinglists at smcleod.net>; Aaron Roberts <aroberts at domicilium.com>
Cc: gluster-users at gluster.org
Subject: Re: [Gluster-users] ls performance on directories with small number of items

Also note, Sam's example is comparing apples and orchards. Feeding one person from an orchard is not as efficient as feeding one person an apple, but if you're feeding 10000 people...

Also in question with the NFS example: how long until that chown was flushed? How long until another client could see those changes? And that's ignoring the biggie: what happens when the NFS server goes down?
On November 27, 2017 2:49:23 PM PST, Sam McLeod <mailinglists at smcleod.net> wrote:
Hi Aaron,

We also find that Gluster is perhaps not the most performant when operating on directories containing large numbers of files.
For example, a recursive chown on (many!) files took about 18 seconds against a single NFS server on the client side, while the same operation against our simple two-replica Gluster volume took over 15 minutes.
Having said that, while I'm new to the gluster world, things seem to be progressing quite quickly as far as performance improvements go.

I noticed you're running a _very_ old version of Gluster; I'd first suggest upgrading to the latest stable release (3.12.x), and FYI 3.13 is to be released shortly.

I'd also recommend ensuring the following setting is enabled:

performance.stat-prefetch
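
Something along these lines should turn it on if it isn't already – a rough sketch, assuming your volume is called web_vol1 as in your vol info below:

# cache client-side file metadata gathered during lookups/readdirp
gluster volume set web_vol1 performance.stat-prefetch on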

Further to this, additional information about the cluster / volume topology and configuration would help others assist you (but I still think you should upgrade!).
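
The output of these would be a good start, for example:

# volume layout and reconfigured options
gluster volume info
# brick / self-heal process health
gluster volume status
# peer membership and connectivity
gluster peer status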

--
Sam McLeod
https://smcleod.net
https://twitter.com/s_mcleod


On 28 Nov 2017, at 12:18 am, Aaron Roberts <aroberts at domicilium.com> wrote:

Hi,
               I have a situation where an Apache web server is trying to locate the IndexDocument for a directory on a gluster volume.  This URL is being hit roughly 20 times per second.  There is only 1 file in this directory.  However, the parent directory does have a large number of items (123,000+ files and dirs) and we are performing operations to move these files into 2 levels of subdirs.

We are seeing very slow response times (around 8 seconds) in Apache and also when trying to ls this dir.  Before we started the migrations to move files from the large parent dir into the 2 levels of subdirs, we weren’t aware of a problem.

[root@web-02 images]# time ls -l dir1/get/ | wc -l
2

real    0m8.114s
user    0m0.002s
sys     0m0.014s

Other directories with only 1 item return very quickly (<1 sec).

[root@Web-01 images]# time ls -l dir1/tmp1/ | wc -l
2

real    0m0.014s
user    0m0.003s
sys     0m0.006s

I’m just trying to understand what would slow down this operation so much.  Is it the high frequency of attempts to read the directory (Apache hits to dir1/get/)?  Do the move operations on items in the parent directory have any impact?
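
One thing I could try in order to narrow it down is the built-in volume profiler – a rough sketch (syntax assumed for 3.7.x, and I understand profiling adds some overhead, so only for a short window):

# start collecting per-operation latency stats on the volume
gluster volume profile web_vol1 start
# reproduce the slow ls on dir1/get/, then dump the stats (look at LOOKUP / READDIRP latencies)
gluster volume profile web_vol1 info
# stop profiling again to avoid the extra overhead
gluster volume profile web_vol1 stop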

Some background info:

[root@web-02 images]# gluster --version
glusterfs 3.7.20 built on Jan 30 2017 15:39:29
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.

[root@web-02 images]# gluster vol info

Volume Name: web_vol1
Type: Replicate
Volume ID: 0d63de20-c9c2-4931-b4a3-6aed5ae28057
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: web-01:/export/brick1/web_vol1_brick1
Brick2: web-02:/export/brick1/web_vol1_brick1
Options Reconfigured:
performance.readdir-ahead: on
performance.io-thread-count: 32
performance.cache-size: 512MB


Any insight would be gratefully received.

Thanks,
               Aaron

_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
