[Bugs] [Bug 1377193] Poor smallfile read performance on Arbiter volume compared to Replica 3 volume

bugzilla at redhat.com bugzilla at redhat.com
Thu Sep 22 09:55:55 UTC 2016


--- Comment #3 from Ravishankar N <ravishankar at redhat.com> ---
I was able to get similar results in my testing, where the 'files/sec' was
almost half for a 1x (2+1) setup when compared to a 1x3 setup for a 256KB write
size. A summary of the cumulative brick profile info from one such run is given
below for some FOPs:

Replica 3 vol (no. of calls):

FOP        Brick1    Brick2    Brick3
Lookup     28,544    28,545    28,552
Read       17,695    17,507    17,228
Fstat      17,714    17,535    17,247
Inodelk         8         8         8

Arbiter vol (no. of calls):

FOP        Brick1    Brick2    Arbiter brick
Lookup     56,241    56,246    56,245
Read       34,920    17,508    -
Fstat      34,995    17,533    -
Inodelk    52,442    52,442    52,442

I see that the total number of reads across all bricks is similar for both the
replica and arbiter setups. In the arbiter vol, zero reads are served from the
arbiter brick, so the read load is spread between the first two bricks.
Likewise for Fstat.

But the problem seems to be in the number of lookups. For the arbiter volume,
the count is roughly double that of replica-3. I'm guessing this is what is
slowing things down. I also see a lot of Inodelks on the arbiter volume, which
is unexpected because the I/O was a read workload. I need to figure out why
these two things are happening.
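For anyone wanting to reproduce the tables above, the counts come from
`gluster volume profile <vol> start` followed by
`gluster volume profile <vol> info cumulative` after the test run. Below is a
minimal sketch of tallying the per-brick FOP call counts from that output so
two runs can be compared side by side. The brick name and sample rows are
illustrative, and the column layout assumed here (%-latency, Avg/Min/Max
latency, No. of calls, Fop) is the usual profile-info format; adjust the regex
if your gluster version prints it differently.

```python
# Sketch: tally {brick: {FOP: call count}} from `gluster volume profile
# <vol> info cumulative` output. Sample text below is illustrative only.
import re
from collections import defaultdict

def fop_counts(profile_text):
    """Parse profile-info output into {brick: {FOP: calls}}."""
    counts = defaultdict(dict)
    brick = None
    for line in profile_text.splitlines():
        m = re.match(r"Brick:\s*(\S+)", line.strip())
        if m:
            brick = m.group(1)
            continue
        # Data rows end with the call count followed by the FOP name,
        # e.g. "0.50  10.00 us  1.00 us  100.00 us  56241  LOOKUP".
        m = re.match(
            r"\s*[\d.]+\s+[\d.]+ us\s+[\d.]+ us\s+[\d.]+ us\s+(\d+)\s+(\w+)",
            line)
        if m and brick:
            counts[brick][m.group(2)] = int(m.group(1))
    return counts

# Hypothetical excerpt in the profile-info layout:
sample = """\
Brick: host1:/bricks/b1
Cumulative Stats:
 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
      0.50      10.00 us       1.00 us     100.00 us          56241      LOOKUP
      2.00      50.00 us       5.00 us     900.00 us          52442     INODELK
"""

print(fop_counts(sample))
```

Running the same parser over the replica-3 and arbiter profile dumps makes the
doubled LOOKUP count and the extra INODELKs easy to diff programmatically.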

