[Gluster-users] ls performance on directories with small number of items
aroberts at domicilium.com
Wed Nov 29 10:11:01 UTC 2017
I understand, roughly, that readdir() and similar operations are simply hard to make fast on a distributed filesystem, and that NFS can do this part faster. I’d like to see if gluster can be tweaked a bit to get this working.
performance.stat-prefetch is set to ‘on’.
Would performance.md-cache-timeout help me?
It is set to 1 on my volume (the default). Would raising this help with servicing a large number of hits for a single file/dir?
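For reference, md-cache settings can be adjusted per volume from the CLI. A sketch against the volume from this thread; the 600-second timeout is an illustrative value, not a recommendation made anywhere in this thread:

```shell
# Sketch: raise the metadata cache timeout on the volume discussed here.
# 600 is illustrative only -- md-cache trades metadata staleness across
# clients for fewer network round trips on lookup()/stat().
gluster volume set web_vol1 performance.md-cache-timeout 600
gluster volume set web_vol1 performance.stat-prefetch on

# Reconfigured options show up under "Options Reconfigured" here:
gluster volume info web_vol1
```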
From: Joe Julian [mailto:joe at julianfamily.org]
Sent: 27 November 2017 23:45
To: gluster-users at gluster.org; Sam McLeod <mailinglists at smcleod.net>; Aaron Roberts <aroberts at domicilium.com>
Cc: gluster-users at gluster.org
Subject: Re: [Gluster-users] ls performance on directories with small number of items
Also note, Sam's example is comparing apples and orchards. Feeding one person from an orchard is not as efficient as feeding one person an apple, but if you're feeding 10000 people...
Also in question with the NFS example: how long until that chown was flushed? How long until another client could see those changes? And that is ignoring the biggie: what happens when the NFS server goes down?
On November 27, 2017 2:49:23 PM PST, Sam McLeod <mailinglists at smcleod.net> wrote:
We also find that Gluster is perhaps not the most performant when performing actions on directories containing large numbers of files.
For example, a recursive chown on (many!) files took about 18 seconds against a single NFS server on the client side, but over 15 minutes on our simple two-replica gluster servers.
Having said that, while I'm new to the gluster world, things seem to be progressing quite quickly in regards to attempts to improve performance.
I noticed you're running a _very_ old version of Gluster. I'd first suggest upgrading to the latest stable (3.12.x), and FYI 3.13 is due to be released shortly.
I'd also recommend ensuring the following setting is enabled:
Further to this, additional information about the cluster / volume topology and configuration would help others assist you (but I still think you should upgrade!).
On 28 Nov 2017, at 12:18 am, Aaron Roberts <aroberts at domicilium.com> wrote:
I have a situation where an Apache web server is trying to locate the IndexDocument for a directory on a gluster volume. This URL is being hit roughly 20 times per second. There is only 1 file in this directory. However, the parent directory does have a large number of items (123,000+ files and dirs), and we are performing operations to move these files into 2 levels of subdirs.
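(For anyone curious what such a fan-out migration looks like: a hypothetical sketch below uses the first two hex characters of an md5 of each filename as the two subdirectory levels. The bucketing scheme and the path are assumptions for illustration, not the actual migration being run here.)

```shell
# Hypothetical sketch of a two-level fan-out: bucket each file by the
# first two hex chars of md5(filename). Scheme and path are made up.
cd /path/to/parent   # placeholder path
for f in *; do
  [ -f "$f" ] || continue                       # skip subdirectories
  h=$(printf '%s' "$f" | md5sum | cut -c1-2)    # two hex chars, e.g. "a3"
  mkdir -p "${h:0:1}/${h:1:1}"                  # e.g. a/3/
  mv -- "$f" "${h:0:1}/${h:1:1}/"
done
```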
We are seeing very slow response times (around 8 seconds) in apache and also when trying to ls on this dir. Before we started the migrations to move files on the large parent dir into 2 sub levels, we weren’t aware of a problem.
[root at web-02 images]# time ls -l dir1/get/ | wc -l
Other directories with only 1 item return very quickly (<1 sec).
[root at Web-01 images]# time ls -l dir1/tmp1/ | wc -l
I’m just trying to understand what would slow down this operation so much. Is it the high frequency of attempts to read the directory (apache hits to dir1/get/) ? Do the move operations on items in the parent directory have any impact?
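For what it's worth, `ls -l` does far more work than a bare readdir(): it also lstat()s every entry, and on a gluster FUSE mount each of those stats can be a network round trip. A quick local sketch of the difference (throwaway directory, illustrative only):

```shell
# Create a throwaway directory with many entries (illustrative only).
dir=$(mktemp -d)
touch "$dir"/file{1..500}

# -f: readdir() only -- no sort, no per-entry stat
time ls -f "$dir" > /dev/null

# -l: readdir() plus one lstat() per entry; on a gluster FUSE mount
# each lstat can mean a round trip to the bricks
time ls -l "$dir" > /dev/null

rm -rf "$dir"
```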
Some background info:
[root at web-02 images]# gluster --version
glusterfs 3.7.20 built on Jan 30 2017 15:39:29
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root at web-02 images]# gluster vol info
Volume Name: web_vol1
Volume ID: 0d63de20-c9c2-4931-b4a3-6aed5ae28057
Number of Bricks: 1 x 2 = 2
Any insight would be gratefully received.