[Gluster-users] The Replicated Volume's performance is bad

Ingam Jiao spjiao at gmail.com
Tue Jul 19 02:35:06 UTC 2011


Hi

I have set up a test environment to measure GlusterFS performance.
The system under test consists of three nodes:
(1) Their IP addresses are 10.4.0.151/8 (Node1), 10.4.0.152/8 (Node2),
and 10.4.0.153/8 (Node3)
(2) Node1 hardware
CPU: Intel Xeon E5506
Memory: 8G DDR3
Physical volume: /dev/sdb1 480618344 202788 456001496 1% /opt/sdb1
(4 HDDs with RAID 5)
(3) Node2 hardware
CPU: Intel Dual-Core E5200
Memory: 2G DDR3
Physical volume: /dev/sdb1 480618344 202788 456001496 1% /opt/sdb1
(4 HDDs with RAID 5)
(4) Node3 hardware
CPU: Intel Dual-Core E5200
Memory: 8G DDR3
Physical volume: /dev/sdb1 480618344 202788 456001496 1% /opt/sdb1
(4 HDDs with RAID 5)

(5) Software
CentOS 5.5, kernel 2.6.18-238.12.1.el5 x86_64
GlusterFS 3.2.1qa3
Brick filesystem: ext3

(6) Test tool
IOzone 3.385

(7) Replicated volume configuration and performance
Volume Name: test
Type: Replicate
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: 10.4.0.151:/opt/sdb1
Brick2: 10.4.0.152:/opt/sdb1
Brick3: 10.4.0.153:/opt/sdb1
Options Reconfigured:
nfs.disable: on
performance.io-thread-count: 32
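
For reference, the two options above were applied with the standard
volume-set commands (just a sketch; the volume name "test" is the one
shown above):

```shell
# Turn off the built-in gluster NFS server on volume "test"
gluster volume set test nfs.disable on
# Raise the io-thread count from the default
gluster volume set test performance.io-thread-count 32
```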
iozone -a -i 0 -i 2 -s 10G -Rb replica-test.xsl -N -O -q 16M -f /mnt/glusterfs/CentOS-5.5.iso -T

The top row is record sizes (in KB); the left column is file size (in KB).

Writer Report
              4     8    16    32    64   128   256   512   1024   2048   4096   8192   16384
10485760     44   109   179   356   720  1436  2930  5759  11410  22933  49754  91838  182878
Re-writer Report
              4     8    16    32    64   128   256   512   1024   2048   4096   8192   16384
10485760     42    83   166   332   693  1394  2700  5319  10608  21440  45776  86004  170773
Random Read Report
              4     8    16    32    64   128   256   512   1024   2048   4096   8192   16384
10485760   3382  3471  3623  4170  3868  4430  6585  9534  11797  17913  31090  56516  115694
Random Write Report
              4     8    16    32    64   128   256   512   1024   2048   4096   8192   16384
10485760    563   672  1526  1720  2114  2747  4374  7338  12538  23384  44771  85929  170237

Reader Report
              4     8    16    32    64   128   256   512   1024   2048   4096   8192   16384
10485760     32    64   129   257   550   955  2072  4135   8328  16543  33186  68814  133116
Re-reader Report
              4     8    16    32    64   128   256   512   1024   2048   4096   8192   16384
10485760     32    67   129   257   541  1048  2065  4139   8277  16779  33216  68519  135729



(8) Striped volume configuration and performance

Volume Name: test
Type: Stripe
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: 10.4.0.151:/opt/sdb1
Brick2: 10.4.0.152:/opt/sdb1
Brick3: 10.4.0.153:/opt/sdb1
Options Reconfigured:
nfs.disable: on

iozone -a -i 0 -i 1 -i 2 -s 10G -Rb replicate-test.xsl -N -O -q 16M -f /mnt/glusterfs/CentOS-5.5.iso -T

The top row is record sizes (in KB); the left column is file size (in KB).

Writer Report
              4     8    16    32    64   128   256   512   1024   2048   4096   8192   16384
10485760    362   391   555   699  1014  1616  2828  5183  10162  20367  40854  81182  162133
Re-writer Report
              4     8    16    32    64   128   256   512   1024   2048   4096   8192   16384
10485760    364   413   574   734  1133  1799  3180  6028  11150  23897  45782  92984  194933
Reader Report
              4     8    16    32    64   128   256   512   1024   2048   4096   8192   16384
10485760     90   178   361   717  1093  2209  4403  8828  17561  35136  70241  140302  281698
Re-reader Report
              4     8    16    32    64   128   256   512   1024   2048   4096   8192   16384
10485760     91   181   362   734  1096  2189  4370  8745  17696  35058  69909  140101  278609
Random Read Report
              4     8    16    32    64   128   256   512   1024   2048   4096   8192   16384
10485760   2236  2122  2556  3279  4077  5083  10051  19841  28954  46756  80266  150300  289528
Random Write Report
              4     8    16    32    64   128   256   512   1024   2048   4096   8192   16384
10485760   1732  1824  2195  2704  3765  4477  8713  13889  20649  36065  60374  124263  221658


Comparing the two sets of results, the replicated volume's performance is bad.

Can anyone tell me why the replicated volume performs so badly?

By the way, I plan to deploy GlusterFS on 10 nodes with the replica
count set to 3. Can anyone tell me how to configure GlusterFS for that?
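
In case it helps, here is the kind of command I had in mind (a sketch
only; the hostnames and brick paths are placeholders, not my real
nodes). As far as I understand, the total brick count has to be a
multiple of the replica count, so with one brick per node and replica 3
only 9 of the 10 nodes fit evenly:

```shell
# Hypothetical layout: 9 bricks in 3 replica sets of 3
# (a distributed-replicated volume)
gluster volume create bigvol replica 3 transport tcp \
    node1:/opt/sdb1 node2:/opt/sdb1 node3:/opt/sdb1 \
    node4:/opt/sdb1 node5:/opt/sdb1 node6:/opt/sdb1 \
    node7:/opt/sdb1 node8:/opt/sdb1 node9:/opt/sdb1
gluster volume start bigvol
```

Is that the right approach, or should I put several bricks on some
nodes so that all 10 nodes are used?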

Thanks

Ingam Jiao





