[Gluster-users] Why so bad performance?
Kirby Zhou
kirbyzhou at sohu-rd.com
Fri Dec 5 18:00:14 UTC 2008
OK, they are HP 380 G4 rack server boxes.
Two P4 Xeon 3.0 GHz CPUs with Hyper-Threading, 4 GB RAM.
SCSI 72 GB x5 RAID5 for /exports/ns; SATA 250 GB x7 x2 RAID5 for /exports/disk1
and /exports/disk2.
All 4 boxes are connected to a gigabit Ethernet switch.
10.10.123.21 and 10.10.123.22 are servers, while 10.10.123.25 and
10.10.65.64 are clients.
OS is RHEL-5.2/x86_64, kernel-2.6.18-92.el5.
The important rpms are:
fuse.x86_64-2.7.4-1.el5.rf from dag.wieers.com
dkms-fuse.noarch-2.7.4-1.nodist.rf from dag.wieers.com
dkms.noarch-2.0.20.4-1.el5.rf from dag.wieers.com
glusterfs.x86_64-1.3.10-1 from glusterfs.org
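Since fuse here comes from dkms packages, one generic sanity check (my addition, not something from this thread) is to confirm the fuse device node exists before trying to mount:

```shell
# Generic check (assumed, not from the thread): glusterfs clients
# need /dev/fuse to mount; prints "present" or "missing".
test -e /dev/fuse && echo present || echo missing
```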
I have already posted my COMPLETE spec file in the very first post. The 2
servers use the same spec file.
[@123.21 /]# glusterfsd -f /etc/glusterfs/glusterfs-server.vol
[@123.22 /]# glusterfsd -f /etc/glusterfs/glusterfs-server.vol
[@123.25 /]# glusterfs -s 10.10.123.21 /mnt
[@65.64 /]# glusterfs -s 10.10.123.21 /mnt
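Before benchmarking, it is worth confirming that /mnt really is the fuse mount and not the local disk. One generic way (my addition, not from the thread) is to print the filesystem type backing the path:

```shell
# Print the filesystem type backing a path; on a working glusterfs
# client mount this should report a fuse type rather than ext3.
# /mnt is used here as in the thread; substitute your mount point.
stat -f -c %T /mnt
```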
I have tried running 'dd' concurrently on the 2 client boxes. The sum of the 2
clients' speeds is only 13 MB/s, only a bit more than a single client.
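For comparison, the same kind of dd loop can be run against a local directory to get a baseline for the disks alone. A minimal self-contained variant of the loop from this thread (writing to a temp dir instead of the gluster mount, so it runs anywhere):

```shell
# Self-contained variant of the benchmark loop from the thread,
# writing to a local temp dir instead of /mnt.
dir=$(mktemp -d)
for i in 0 1 2; do
  # dd reports the transfer rate on its last stderr line.
  dd if=/dev/zero of="$dir/yyy$i" bs=4M count=2 2>&1 | tail -n 1
done
rm -rf "$dir"
```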
-----Original Message-----
From: a s p a s i a [mailto:aspasia.sf at gmail.com]
Sent: Saturday, December 06, 2008 1:43 AM
To: Kirby Zhou
Cc: RedShift; gluster-users at gluster.org
Subject: Re: [Gluster-users] Why so bad performance?
interesting ...
did you mention what your HW is on the GlusterFS server side? Can
you post the complete specs of your config?
- HW: proc speed/type, RAM, etc.
- Where have you installed both the GlusterFS server and client?
- a.
On Fri, Dec 5, 2008 at 8:30 AM, Kirby Zhou <kirbyzhou at sohu-rd.com> wrote:
> I have tested using scp:
>
> [@123.25 /]# scp /opt/xxx 10.10.123.22:/opt/
> xxx 100% 256MB 51.2MB/s 00:05
>
> [@123.25 /]# dd if=/opt/xxx of=/mnt/xxx bs=2M
> 128+0 records in
> 128+0 records out
> 268435456 bytes (268 MB) copied, 23.0106 seconds, 11.7 MB/s
>
> So you can see how slow my gluster is.
> I want to know what I can do to improve the performance.
>
> -----Original Message-----
> From: gluster-users-bounces at gluster.org
> [mailto:gluster-users-bounces at gluster.org] On Behalf Of RedShift
> Sent: Friday, December 05, 2008 11:45 PM
> To: gluster-users at gluster.org
> Subject: Re: [Gluster-users] Why so bad performance?
>
> Hello Kirby,
>
>
> Please check that every involved device is running at gigabit speed, and test
> with at least 100 MB of data.
>
>
> Glenn
>
> Kirby Zhou wrote:
>> I constructed a 2-server / 1-client gluster with gigabit Ethernet, but got
>> such bad benchmark results.
>> Is there anything I can tune?
>>
>> [@65.64 ~]# for ((i=0;i<17;++i)) ; do dd if=/dev/zero of=/mnt/yyy$i bs=4M
>> count=2 ; done
>> 2+0 records in
>> 2+0 records out
>> 8388608 bytes (8.4 MB) copied, 0.770213 seconds, 10.9 MB/s
>> 2+0 records in
>> 2+0 records out
>> 8388608 bytes (8.4 MB) copied, 0.771131 seconds, 10.9 MB/s
>> ...
>>
>> [@123.21 glusterfs]# cat glusterfs-server.vol
>> volume brick1
>> type storage/posix
>> option directory /exports/disk1
>> end-volume
>>
>> volume brick2
>> type storage/posix
>> option directory /exports/disk2
>> end-volume
>>
>> volume brick-ns
>> type storage/posix
>> option directory /exports/ns
>> end-volume
>>
>> ### Add network serving capability to above brick.
>> volume server
>> type protocol/server
>> option transport-type tcp/server # For TCP/IP transport
>> subvolumes brick1 brick2 brick-ns
>>   option auth.ip.brick1.allow 10.10.*   # Allow access to "brick1" volume
>>   option auth.ip.brick2.allow 10.10.*   # Allow access to "brick2" volume
>>   option auth.ip.brick-ns.allow 10.10.* # Allow access to "brick-ns" volume
>> end-volume
>>
>> [@123.21 glusterfs]# cat glusterfs-client.vol
>> volume remote-brick1_1
>> type protocol/client
>> option transport-type tcp/client
>> option remote-host 10.10.123.21
>> option remote-subvolume brick1
>> end-volume
>>
>> volume remote-brick1_2
>> type protocol/client
>> option transport-type tcp/client
>> option remote-host 10.10.123.21
>> option remote-subvolume brick2
>> end-volume
>>
>> volume remote-brick2_1
>> type protocol/client
>> option transport-type tcp/client
>> option remote-host 10.10.123.22
>> option remote-subvolume brick1
>> end-volume
>>
>> volume remote-brick2_2
>> type protocol/client
>> option transport-type tcp/client
>> option remote-host 10.10.123.22
>> option remote-subvolume brick2
>> end-volume
>>
>> volume brick-afr1_2
>> type cluster/afr
>> subvolumes remote-brick1_1 remote-brick2_2
>> end-volume
>>
>> volume brick-afr2_1
>> type cluster/afr
>> subvolumes remote-brick1_2 remote-brick2_1
>> end-volume
>>
>> volume remote-ns1
>> type protocol/client
>> option transport-type tcp/client
>> option remote-host 10.10.123.21
>> option remote-subvolume brick-ns
>> end-volume
>>
>> volume remote-ns2
>> type protocol/client
>> option transport-type tcp/client
>> option remote-host 10.10.123.22
>> option remote-subvolume brick-ns
>> end-volume
>>
>> volume ns-afr0
>> type cluster/afr
>> subvolumes remote-ns1 remote-ns2
>> end-volume
>>
>> volume unify0
>> type cluster/unify
>> option scheduler alu
>> option alu.limits.min-free-disk 10%
>> option alu.order disk-usage
>> option namespace ns-afr0
>> subvolumes brick-afr1_2 brick-afr2_1
>> end-volume
>>
>>
>>
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
>>
>>
>
>
--
A S P A S I A
. . . . . . . . . . ..