[Gluster-users] Why so bad performance?
Kirby Zhou
kirbyzhou at sohu-rd.com
Fri Dec 5 16:30:41 UTC 2008
I have tested using scp:
[@123.25 /]# scp /opt/xxx 10.10.123.22:/opt/
xxx 100% 256MB 51.2MB/s 00:05
[@123.25 /]# dd if=/opt/xxx of=/mnt/xxx bs=2M
128+0 records in
128+0 records out
268435456 bytes (268 MB) copied, 23.0106 seconds, 11.7 MB/s
So you can see how slow my GlusterFS mount is: scp moves the same 256 MB file at
51.2 MB/s, but dd through the mount only manages 11.7 MB/s.
I want to know what I can do to improve the performance.
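One way to narrow this down: compare a write to the brick's local disk with the
same write through the GlusterFS mount. A sketch — the BRICK and MOUNT paths
below are stand-ins; substitute the real brick directory (e.g. /exports/disk1)
and the real glusterfs mount point (e.g. /mnt) before drawing conclusions:

```shell
#!/bin/sh
# Sketch: separate local-disk speed from GlusterFS overhead.
# BRICK and MOUNT are placeholders for a real brick directory
# and a real glusterfs mount point.
BRICK=${BRICK:-/tmp/brick-test}
MOUNT=${MOUNT:-/tmp/mount-test}
mkdir -p "$BRICK" "$MOUNT"

# conv=fdatasync forces the data to disk before dd reports its rate,
# so the figure is not just page-cache speed.
dd if=/dev/zero of="$BRICK/testfile" bs=2M count=16 conv=fdatasync
dd if=/dev/zero of="$MOUNT/testfile" bs=2M count=16 conv=fdatasync
```

If the brick-local figure is near raw disk speed but the mount figure sits
around 11 MB/s, the bottleneck is the network path or the translator stack, not
the disks. Note also that with client-side AFR as configured here, every write
leaves the client twice (once per replica), so even a perfect gigabit link tops
out near half of wire speed.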
-----Original Message-----
From: gluster-users-bounces at gluster.org
[mailto:gluster-users-bounces at gluster.org] On Behalf Of RedShift
Sent: Friday, December 05, 2008 11:45 PM
To: gluster-users at gluster.org
Subject: Re: [Gluster-users] Why so bad performance?
Hello Kirby,
Please check that every device involved is running at gigabit speed, and test
with at least 100 MB of data.
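One quick way to do the link-speed check (ethtool works too, but needs root) is
to read the negotiated speed the kernel exposes in sysfs for each interface —
1000 means gigabit:

```shell
#!/bin/sh
# Print the negotiated link speed (in Mb/s) of every network interface.
# Virtual interfaces such as lo report no speed; print n/a for those.
for dev in /sys/class/net/*/speed; do
    printf '%s: ' "$dev"
    cat "$dev" 2>/dev/null || echo 'n/a'
done
```

Run this on both servers and the client; a single 100 Mb/s link or a
mis-negotiated port anywhere on the path caps the whole cluster at about
11 MB/s, which matches the dd numbers above suspiciously well.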
Glenn
Kirby Zhou wrote:
> I constructed a 2-server / 1-client GlusterFS setup over gigabit Ethernet,
> but got very bad benchmark results.
> Is there anything I can tune?
>
> [@65.64 ~]# for ((i=0;i<17;++i)) ; do dd if=/dev/zero of=/mnt/yyy$i bs=4M
> count=2 ; done
> 2+0 records in
> 2+0 records out
> 8388608 bytes (8.4 MB) copied, 0.770213 seconds, 10.9 MB/s
> 2+0 records in
> 2+0 records out
> 8388608 bytes (8.4 MB) copied, 0.771131 seconds, 10.9 MB/s
> ...
>
> [@123.21 glusterfs]# cat glusterfs-server.vol
> volume brick1
> type storage/posix
> option directory /exports/disk1
> end-volume
>
> volume brick2
> type storage/posix
> option directory /exports/disk2
> end-volume
>
> volume brick-ns
> type storage/posix
> option directory /exports/ns
> end-volume
>
> ### Add network serving capability to above brick.
> volume server
> type protocol/server
> option transport-type tcp/server # For TCP/IP transport
> subvolumes brick1 brick2 brick-ns
> option auth.ip.brick1.allow 10.10.* # Allow access to "brick1" volume
> option auth.ip.brick2.allow 10.10.* # Allow access to "brick2" volume
> option auth.ip.brick-ns.allow 10.10.* # Allow access to "brick-ns" volume
> end-volume
>
> [@123.21 glusterfs]# cat glusterfs-client.vol
> volume remote-brick1_1
> type protocol/client
> option transport-type tcp/client
> option remote-host 10.10.123.21
> option remote-subvolume brick1
> end-volume
>
> volume remote-brick1_2
> type protocol/client
> option transport-type tcp/client
> option remote-host 10.10.123.21
> option remote-subvolume brick2
> end-volume
>
> volume remote-brick2_1
> type protocol/client
> option transport-type tcp/client
> option remote-host 10.10.123.22
> option remote-subvolume brick1
> end-volume
>
> volume remote-brick2_2
> type protocol/client
> option transport-type tcp/client
> option remote-host 10.10.123.22
> option remote-subvolume brick2
> end-volume
>
> volume brick-afr1_2
> type cluster/afr
> subvolumes remote-brick1_1 remote-brick2_2
> end-volume
>
> volume brick-afr2_1
> type cluster/afr
> subvolumes remote-brick1_2 remote-brick2_1
> end-volume
>
> volume remote-ns1
> type protocol/client
> option transport-type tcp/client
> option remote-host 10.10.123.21
> option remote-subvolume brick-ns
> end-volume
>
> volume remote-ns2
> type protocol/client
> option transport-type tcp/client
> option remote-host 10.10.123.22
> option remote-subvolume brick-ns
> end-volume
>
> volume ns-afr0
> type cluster/afr
> subvolumes remote-ns1 remote-ns2
> end-volume
>
> volume unify0
> type cluster/unify
> option scheduler alu
> option alu.limits.min-free-disk 10%
> option alu.order disk-usage
> option namespace ns-afr0
> subvolumes brick-afr1_2 brick-afr2_1
> end-volume
>
>
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
>
>
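One more thing about the client volfile quoted above: it has no performance
translators at all, so every write is pushed synchronously through both AFR
legs. In the GlusterFS 1.3/1.4 series the usual first fix for this symptom was
to stack write-behind (and optionally io-threads) on top of unify0. A sketch —
option names follow the docs of that era and should be checked against the
installed version:

```
### Aggregate small writes before they cross the network.
volume writebehind
  type performance/write-behind
  option aggregate-size 1MB      # batch writes up to 1 MB per network call
  subvolumes unify0
end-volume

### Hand requests to a pool of worker threads so one slow
### operation does not stall the whole mount.
volume iothreads
  type performance/io-threads
  option thread-count 4
  subvolumes writebehind
end-volume
```

With this stack in place the client mounts iothreads (the topmost volume)
instead of unify0.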