[Gluster-users] probable cause of read performance issue

John Mark Walker johnmark at redhat.com
Mon Dec 17 15:18:07 UTC 2012

Thanks for sending this to the list. I'm not sure what the solution is, since most people need R+W. For 3.4, there are some features around the WORM (write once, read many) use case which might improve performance for the workload you're testing.

Could you perform the same test with the qa5 release from last week? 
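For anyone who wants to reproduce the four combinations from the chart, the two xlators can be toggled per volume with `gluster volume set`. The sketch below assumes the volume name `abc` and the iozone invocation from the report further down, plus a client mount at /mnt served from localhost; adjust the mount source and iozone arguments to your setup.

```shell
# Sweep the four read-ahead / write-behind combinations and run iozone
# for each one. Volume name "abc" and the -y/-n/-g iozone parameters are
# taken from the original report; the localhost mount is an assumption.
for ra in on off; do
    for wb in on off; do
        gluster volume set abc performance.read-ahead  $ra
        gluster volume set abc performance.write-behind $wb
        # Remount so the client picks up the new xlator graph.
        umount /mnt && mount -t glusterfs localhost:/abc /mnt
        ./iozone -a -n 16g -g 16g -i 0 -i 1 -i 2 -i 3 \
                 -f /mnt/iozone -Rb "ra-${ra}_wb-${wb}.xls" -y 4096k
    done
done
```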


----- Original Message -----

> hi, all:
> When I was testing glusterfs performance by iozone with only ONE
> machine, I got the following chart:

> https://dl.dropbox.com/u/33453649/perf_htm_m19da4047.jpg

> The Y axis represents disk throughput in KB/s; the X axis represents
> the test cases.

> It is obvious that the R+W case has the worst read performance. R+W
> means both the read-ahead xlator and the write-behind xlator are
> turned on. Unfortunately, that's the default configuration of
> glusterfs.

> Though I ran glusterfs on only one machine, I think this result
> explains something. Maybe you have more insight into it.

> More info:
> -------------------------
> cmdline: ./iozone -a -n 16g -g 16g -i 0 -i 1 -i 2 -i 3 -f /mnt/iozone
> -Rb no-rh.xls -y 4096k
> CPU: Intel i3-2120 x 4
> RAM: 8G
> Glusterfs Version: 3.3
> Volume info:
> Volume Name: abc
> Type: Distribute
> Volume ID: 6e0104f1-d32a-4ed4-b011-d29ccf27abe1
> Status: Started
> Number of Bricks: 1
> Transport-type: tcp
> Bricks:
> Brick1:
> Options Reconfigured:
> performance.io-cache: off
> performance.quick-read: off
> performance.read-ahead: on
> performance.write-behind: on
> --------------------------------------------

> Best Regards.
> Jules Wang.

> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
