[Bugs] [Bug 1200457] gluster performance issue as data is added to volume. tar extraction of files goes from 1 minute on an empty volume to 20 minutes on a volume with 40TB.

bugzilla at redhat.com bugzilla at redhat.com
Thu Mar 26 19:19:03 UTC 2015


Shyamsundar <srangana at redhat.com> changed:

           What    |Removed                     |Added
                 CC|                            |drobinson at corvidtec.com
              Flags|                            |needinfo?(drobinson at corvidtec.com)

--- Comment #8 from Shyamsundar <srangana at redhat.com> ---
Test that reproduces the issue:

1) Create ~5TB of 320KB files spread across several directories
2) Run "find . -exec cat {} \; > /dev/null" so that metadata and data are read
from the gluster volume through a FUSE mount
3) After (2) has been running for about 1-2 hours, stop the command
4) Now extract the tarball on the FUSE mount; this takes approximately 2x the
time it would take if it were done right after step (1), or after restarting
the volume and remounting it on the clients.
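The steps above can be scaled down into a runnable sketch. Everything below is an illustrative stand-in, not the original setup: a local directory instead of the real FUSE mount, 4KB files instead of 320KB ones, and a hundred files instead of ~5TB:

```shell
#!/bin/bash
# Scaled-down sketch of the reproduction; paths and sizes are
# illustrative stand-ins for the real gluster FUSE mount.
set -e
MOUNT=/tmp/repro-mount            # stand-in for the FUSE mount point
rm -rf "$MOUNT" /tmp/repro.tar    # start clean so reruns behave

# Step 1: create small files spread across several directories
for d in 0 1 2 3; do
    mkdir -p "$MOUNT/dir$d"
    for f in $(seq 1 25); do
        dd if=/dev/zero of="$MOUNT/dir$d/file$f" bs=4k count=1 2>/dev/null
    done
done

# Step 2: read metadata and data back, as in the bug report
find "$MOUNT" -type f -exec cat {} \; > /dev/null

# Step 4: build a tarball of the tree, then time its extraction
# back onto the mount (extract/ is created after tar -cf so the
# archive does not include it)
tar -cf /tmp/repro.tar -C "$MOUNT" .
mkdir -p "$MOUNT/extract"
time tar -xf /tmp/repro.tar -C "$MOUNT/extract"
```

On a real volume the interesting comparison is this extraction time before and after the long read phase, not the absolute numbers from this toy run.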

Observations:

1) The slowness demonstrated in step (4) occurs only for the first 1-2 tar
extractions, after which performance returns to the good numbers
2) In these slow runs, mkdir and create are the FOPs that slow down the
entire process
3) Placing io-stats on top of POSIX on the bricks, and checking the time
deltas for each FOP across POSIX -> server xlator -> FUSE, shows that the time
is consumed in POSIX; that layer needs further investigation.
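The io-stats placement described in (3) corresponds roughly to a brick volfile fragment like the one below. The volume names, brick path, and option values are illustrative, not taken from the affected setup:

```
volume repro-posix
    type storage/posix
    option directory /data/brick1        # illustrative brick path
end-volume

volume repro-io-stats
    type debug/io-stats
    option latency-measurement on        # record per-FOP latency
    option count-fop-hits on
    subvolumes repro-posix               # stacked directly on POSIX
end-volume
```

With latency measured here, the per-FOP times can be compared against those reported at the server xlator and at the FUSE client to see where the delta accumulates.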

@David, would it be possible to run a similar io-stats capture on your end on
your volumes, so that I can corroborate that this is a POSIX xlator layer
issue?

