[Gluster-devel] io-cache translator does not work for multi-bricks

huangql huangql at ihep.ac.cn
Wed Oct 19 01:03:27 UTC 2011


Dear all,


Recently, I installed GlusterFS 3.2.2 from source and have been running IO performance tests while tuning cache-size and cache-timeout. I found that the cache does not take effect on a volume with multiple bricks, which is strange and has me stuck.
For example, volume info:
Volume Name: strip-vol
Type: Stripe
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: cloud04:/web-data01/data02   //multi bricks
Brick2: cloud04:/web-data01/data03
Options Reconfigured:
performance.cache-refresh-timeout: 60
performance.cache-size: 1024000000
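
For reference, the two options above can be set at runtime with the gluster CLI; this is the standard `volume set` syntax, with the volume name and values taken from the post:

```shell
# Tune the io-cache translator on the striped volume.
# cache-size is in bytes; cache-refresh-timeout is in seconds.
gluster volume set strip-vol performance.cache-size 1024000000
gluster volume set strip-vol performance.cache-refresh-timeout 60
```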

test scripts:  
for((i=0;i<3;i++)); do dd if=f20M of=/dev/zero bs=1M; sleep 2;done 
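
As an aside, a more controlled variant of this benchmark would write to /dev/null (the conventional sink for read tests) and drop the kernel page cache between runs, so that repeat reads exercise GlusterFS's io-cache rather than the client's local page cache. A sketch, with the mount point and file path assumed:

```shell
#!/bin/sh
# Hedged benchmark sketch; MNT is an assumed GlusterFS mount point.
MNT=/mnt/testfs
FILE="$MNT/f20M"

for i in 1 2 3; do
    sync
    # Drop the client's page cache (requires root) so only
    # GlusterFS-level caching can serve the repeat read.
    echo 3 > /proc/sys/vm/drop_caches
    dd if="$FILE" of=/dev/null bs=1M
    sleep 2
done
```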

test results:

First test:
Description: striped volume with multiple bricks (strip-vol above)

[root at cloud04 testfs]# for((i=0;i<3;i++)); do dd if=f20M of=/dev/zero bs=1M; sleep 2;done
20+0 records in
20+0 records out
20971520 bytes (21 MB) copied, 0.107977 seconds, 89 MB/s
20+0 records in
20+0 records out
20971520 bytes (21 MB) copied, 0.108642 seconds, 85 MB/s
20+0 records in
20+0 records out
20971520 bytes (21 MB) copied, 0.10802 seconds, 90 MB/s

From the dd test over 1 Gb/s Ethernet, the throughput stays at network speed on every pass, which suggests the same file is fetched over the network each time instead of being served from cache, even though it should have been cached after the first read.

Second test:

Description: striped volume with only one brick on a single server
Volume Name: test-vol
Type: Stripe
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: cloud04:/web-data01/data02   //single brick
Options Reconfigured:
performance.cache-refresh-timeout: 60
performance.cache-size: 1024000000

[root at cloud04 testfs]# for((i=0;i<3;i++)); do dd if=f20M of=/dev/zero bs=1M; sleep 2;done
20+0 records in
20+0 records out
20971520 bytes (21 MB) copied, 0.107977 seconds, 89 MB/s
20+0 records in
20+0 records out
20971520 bytes (21 MB) copied, 0.108642 seconds, 899 MB/s
20+0 records in
20+0 records out
20971520 bytes (21 MB) copied, 0.10802 seconds, 897 MB/s


The result shows that repeat reads are served from the cache, which is the expected behaviour.


Could you explain why I get low performance in the first test with multiple bricks? I debugged and traced the two volumes and found that a volume with a single brick caches a file once it is accessed, so subsequent reads of the same file come from the cache rather than over the network. For a volume with multiple bricks, however, the cache feature does not work. Could you give me some configuration tips to optimize IO performance on a multi-brick volume?
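
One way to start debugging this is to inspect the generated client volfile and confirm that the io-cache translator is present, and where it sits relative to the stripe translator in the graph. The path below follows the usual glusterd layout for 3.x, but is an assumption for this particular install:

```shell
# Assumed volfile location under glusterd's working directory.
VOLFILE=/var/lib/glusterd/vols/strip-vol/strip-vol-fuse.vol

# Show the full translator graph, then just the io-cache section.
cat "$VOLFILE"
grep -A 3 'performance/io-cache' "$VOLFILE"
```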

Thank you in advance for any help.

Cheers,
Qiulan
2011-10-18
====================================================================
Computing center,the Institute of High Energy Physics, CAS, China
Qiulan Huang                         Tel: (+86) 10 8823 6010-105
P.O. Box 918-7                       Fax: (+86) 10 8823 6839
Beijing 100049  P.R. China           Email: Qiulan.Huang at ihep.ac.cn
===================================================================

