[Bugs] [Bug 1200457] gluster performance issue as data is added to volume. tar extraction of files goes from 1-minute on empty volume to 20-minutes on volume with 40TB.

bugzilla at redhat.com
Wed Mar 25 13:51:13 UTC 2015


--- Comment #7 from Shyamsundar <srangana at redhat.com> ---
Update (dev):

With the change from comment #5, I ran a series of tests to see whether it
improves performance. Here are the current observations.

1) Tried reproducing the issue on a couple of setups in order to test the fix,
but the aged volume is not consistently slow, so the issue cannot be reliably
reproduced at present.

2) In the cases where the issue was reproduced, the behavior was as follows:
   - Have a large amount of data on the volume
   - Fill up the various internal caches by listing all files in the
volume and also cat'ing their contents
   - The first untar run after this is slow; from the next run onward things
catch up and are fine (these tests were on a pure distribute volume)
   - When the untar is slow, mkdir and create are the FOPs that show the
slowness, matching the investigation in the bug report
   - The slowness is almost entirely in the brick processes; the client
stack does not seem to contribute anything (based on io-stats output)
   - With the prune fix, the situation does not improve

3) Continuing to debug possible causes of this slow first run on the setup,
since the reason for the slowness seems to align with what was initially
reported and analyzed.
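The reproduction sequence in point 2 can be sketched roughly as below. This is
a self-contained illustration, not the exact commands used: MNT stands in for
the glusterfs mount point of the aged volume (here it is just a temp directory
so the sketch runs anywhere), and the tarball is a tiny synthetic tree rather
than the real workload.

```shell
set -e
MNT=$(mktemp -d)            # placeholder for the gluster mount of the aged volume

# Build a small tarball to extract (the report used a real, larger tarball)
SRC=$(mktemp -d)
mkdir -p "$SRC/tree/a/b"
echo data > "$SRC/tree/a/b/file.txt"
TAR="$SRC/tree.tar"
tar -C "$SRC" -cf "$TAR" tree

# Step 1: warm the internal caches -- list every file and cat its contents
find "$MNT" -type f -exec cat {} + > /dev/null 2>&1 || true

# Step 2: time the first untar after the cache warm-up (the slow run)
time tar -C "$MNT" -xf "$TAR"

# Step 3: repeat -- per the observations, runs after the first catch up
rm -rf "$MNT/tree"
time tar -C "$MNT" -xf "$TAR"
```

On a real setup, per-FOP latency on the bricks (the io-stats numbers that
pointed at mkdir/create) can be collected with `gluster volume profile <VOL>
start` before the run and `gluster volume profile <VOL> info` after it.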

You are receiving this mail because:
You are on the CC list for the bug.
Unsubscribe from this bug https://bugzilla.redhat.com/token.cgi?t=Rtc7wraReR&a=cc_unsubscribe
