[Bugs] [Bug 1200457] gluster performance issue as data is added to volume. tar extraction of files goes from 1 minute on an empty volume to 20 minutes on a volume with 40 TB.

bugzilla at redhat.com
Wed Mar 18 18:44:52 UTC 2015


--- Comment #3 from Shyamsundar <srangana at redhat.com> ---
Updating the analysis done to date (see the mail thread for details):

1) Volumes are classified as slow (i.e., with a lot of pre-existing data) and
fast (new volumes carved from the same backend file system that the slow bricks
are on, with little or no data).

2) We ran an strace of tar and also collected io-stats output from these
volumes; both show that create and mkdir are slower on the slow volume than on
the fast volume. This appears to be the overall reason for the slowness.

3) The tarball extraction is to a new directory on the gluster mount, so all
lookups etc. happen within this new namespace on the volume.

4) Checked the memory footprints of the slow and fast bricks; nothing untoward
was noticed there.

5) Restarted the slow volume, just as a test case to start from scratch; no
improvement in performance (this was on David's setup).
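The timing comparison in (2) can be approximated with standard tools. Below is a minimal sketch that builds a small test tarball and times its extraction into a fresh directory; the directory names, file counts, and the real diagnostic commands shown in the trailing comments (`strace -c`, `gluster volume profile`) are illustrative assumptions, not taken from the original report:

```shell
#!/bin/sh
# Sketch of the extraction-timing measurement described above.
# MOUNT is a stand-in for the glusterfs mount point on the real systems.
set -e
MOUNT=$(mktemp -d)
SRC=$(mktemp -d)
TARBALL=$(mktemp)

# Build a small test tarball (100 empty files) to extract.
for i in $(seq 1 100); do touch "$SRC/file$i"; done
tar cf "$TARBALL" -C "$SRC" .

# Time extraction into a fresh directory, mirroring the reported workload.
mkdir "$MOUNT/extract"
start=$(date +%s%N)
tar xf "$TARBALL" -C "$MOUNT/extract"
end=$(date +%s%N)
echo "extract took $(( (end - start) / 1000000 )) ms"

# On a real volume, the per-syscall breakdown and brick-side stats would
# come from something like:
#   strace -c -f tar xf big.tar -C /mnt/<volume>/newdir
#   gluster volume profile <volname> start
#   gluster volume profile <volname> info
```

Running this against the slow and fast mounts side by side, and comparing the create/mkdir lines in the strace and profile output, is the comparison summarized in point (2).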


My base filesystem has 40 TB and the tar takes 19 minutes. After I copied over
10 TB, the tar extraction went from 1 minute to 7 minutes.

My suspicion is that it is related to the number of files, not necessarily file
size. Shyam is looking into reproducing this behavior on a Red Hat system.
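One way to probe that suspicion is to time file creates in an empty tree versus a tree pre-populated with many files. The sketch below uses small placeholder counts on a local scratch directory; on a local filesystem both runs will be fast, and the effect being hypothesized would only show up on a gluster mount, where each create involves network round-trips and brick-side metadata work:

```shell
#!/bin/sh
# Hypothetical test of the "file count, not file size" suspicion.
# Counts are small placeholders; the real volumes hold far more files.
set -e
BASE=$(mktemp -d)
mkdir "$BASE/empty" "$BASE/populated"

# Pre-populate one directory with 5000 empty files.
for i in $(seq 1 5000); do : > "$BASE/populated/pre$i"; done

# Time 500 new creates in a given directory, reporting milliseconds.
time_creates() {
    dir=$1
    start=$(date +%s%N)
    for i in $(seq 1 500); do : > "$dir/new$i"; done
    end=$(date +%s%N)
    echo $(( (end - start) / 1000000 ))
}

echo "empty dir:     $(time_creates "$BASE/empty") ms"
echo "populated dir: $(time_creates "$BASE/populated") ms"
```

If the populated run degrades markedly on the gluster mount but not on the bare backend filesystem, that would point at gluster-side metadata handling rather than the underlying bricks.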


On 02/12/2015 11:18 AM, David F. Robinson wrote:
> Shyam,
> You asked me to stop/start the slow volume to see if it fixed the timing
> issue.  I stopped/started homegfs_backup (the production volume with 40+
> TB) and it didn't make it faster.  I didn't stop/start the fast volume
> to see if it made it slower.  I just did that and sent out an email.  I
> saw a similar result as Pranith.


