[Bugs] [Bug 1200457] gluster performance issue as data is added to volume. tar extraction of files goes from 1-minute on empty volume to 20-minutes on volume with 40TB.

bugzilla at redhat.com bugzilla at redhat.com
Mon Mar 30 17:09:27 UTC 2015


https://bugzilla.redhat.com/show_bug.cgi?id=1200457



--- Comment #9 from Shyamsundar <srangana at redhat.com> ---
Further observations and info
-----------------------------

A) Replicated the test on David's (the reporter's) setup with an io-stats xlator
on top of the posix xlator and one below the server xlator (as usual), and
observed that almost all of the latency comes from the posix xlator's mkdir and
create FOPs.

This points to the slowness being in the posix xlator or below it.

David had also run the tar extraction directly on the brick, and that was fast
(10 seconds), so there is no real slowness in XFS itself.

Current suspicion is on the link creation in the flat .glusterfs namespace, as
that would have accumulated a lot of files (the customer setup holds about 50TB
of data).
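For context, the posix xlator keeps a per-file hard link under .glusterfs/,
fanned out by the first characters of the file's GFID; over time each fan-out
directory accumulates one entry per file on the brick. A rough sketch of that
path mapping (the brick path and GFID below are made up for illustration):

```python
import os
import uuid

def glusterfs_link_path(brick_root, gfid):
    """Return the path where the posix xlator keeps the GFID hard link
    for a file: <brick>/.glusterfs/<first 2 hex chars>/<next 2>/<gfid>."""
    g = str(uuid.UUID(gfid))  # normalize to canonical hyphenated form
    return os.path.join(brick_root, ".glusterfs", g[0:2], g[2:4], g)

# Every create on the brick adds one more entry under this namespace.
print(glusterfs_link_path("/bricks/b1", "deadbeef-0000-0000-0000-000000000001"))
```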

B) Attempting a pure XFS test with a workload that behaves like gluster, i.e.
create a file or directory and hard/soft link it within a flat .glusterfs-style
namespace, to see if a relatively filled-up directory causes slowness.
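A minimal harness for such a test might look like the following (a sketch only,
not the exact test being run; directory and file names are hypothetical). The
idea is to time create+link batches as the flat namespace directory grows, and
watch for super-linear slowdown:

```python
import os
import tempfile
import time

def time_linked_creates(n):
    """Create n files and hard-link each into one flat 'namespace' directory,
    mimicking the brick-side create + .glusterfs link pattern; return the
    elapsed time in seconds."""
    with tempfile.TemporaryDirectory() as root:
        data = os.path.join(root, "data")
        ns = os.path.join(root, "ns")  # stand-in for a flat .glusterfs dir
        os.mkdir(data)
        os.mkdir(ns)
        start = time.monotonic()
        for i in range(n):
            path = os.path.join(data, "f%d" % i)
            open(path, "w").close()
            os.link(path, os.path.join(ns, "gfid-%d" % i))
        return time.monotonic() - start

# Compare a small run against a much larger one on the real filesystem.
print("%.3fs for 1000 creates+links" % time_linked_creates(1000))
```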

C) Also requested an strace of the brick processes from David, to see how the
per-syscall latencies deviate between the fast and slow cases.
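One way to capture this (a suggested invocation, with the brick PID left as a
placeholder) is to attach strace with per-call timing, or to collect an
aggregate per-syscall summary:

```shell
# Attach to a running brick process and record filesystem syscalls;
# -T prints the time spent in each syscall, -f follows threads.
strace -f -T -e trace=file -o brick.strace -p <brick-pid>

# Alternatively, collect an aggregate time/count summary per syscall:
strace -c -f -p <brick-pid>
```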
