[Gluster-users] striped volume in 3.4.0qa5 with horrible read performance

samuel samu60 at gmail.com
Mon Dec 17 08:46:28 UTC 2012


Dear folks,

I had been trying to use replicated striped volumes with 3.3 without success
due to https://bugzilla.redhat.com/show_bug.cgi?id=861423, so I moved on to
3.4.0qa5. There I found the bug had been fixed, and I could create a
replicated striped volume with the new version. Write performance was
quite astonishing.

The problem I'm facing now is with reads: they are horribly slow. When I
open a file for editing through the Gluster native client, it takes a few
seconds, and sometimes I get an error saying the file was modified while I
was editing it. A Ruby application that reads the files continuously hits
timeout errors.

I'm using 4 bricks on CentOS 6.3 with the following volume configuration:
Type: Striped-Replicate
Volume ID: 23dbb8dd-5cb3-4c71-9702-7c16ee9a3b3b
Status: Started
Number of Bricks: 1 x 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.0.51.31:/gfs
Brick2: 10.0.51.32:/gfs
Brick3: 10.0.51.33:/gfs
Brick4: 10.0.51.34:/gfs
Options Reconfigured:
performance.quick-read: on
performance.io-thread-count: 32
performance.cache-max-file-size: 128MB
performance.cache-size: 256MB
performance.io-cache: on
cluster.stripe-block-size: 2MB
nfs.disable: on
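For reference, a striped-replicated volume with this layout (1 x 2 x 2) can be created roughly as follows; the volume name `gvol` is just a placeholder, not the real name of my volume:

```shell
# Hypothetical recreation of the volume above ("gvol" is a placeholder name).
# stripe 2 + replica 2 over 4 bricks gives the 1 x 2 x 2 layout shown.
gluster volume create gvol stripe 2 replica 2 transport tcp \
    10.0.51.31:/gfs 10.0.51.32:/gfs \
    10.0.51.33:/gfs 10.0.51.34:/gfs
gluster volume set gvol cluster.stripe-block-size 2MB
gluster volume start gvol
```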

I started profiling and found one node with absurd latency figures. When I
stopped that node, the problem moved to another brick:
 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls       Fop
 ---------   -----------   -----------   -----------   ------------      ----
     99.94  551292.41 us      10.00 us 1996709.00 us            361  FINODELK
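In case anyone wants to reproduce the measurement, the figures above come from the built-in volume profiler; assuming again a volume named `gvol`, the commands are along these lines:

```shell
# Enable per-brick FOP profiling, let the workload run for a while,
# then dump cumulative latency statistics and switch profiling off.
gluster volume profile gvol start
gluster volume profile gvol info
gluster volume profile gvol stop
```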

Could anyone provide some information on how to debug this problem?
Currently the volume is unusable because of the delays.

Thank you very much in advance,
Samuel.