[Gluster-devel] Glusterfs setup+benchmark (slow?)
Fredrik Steen
fredrik+list.gluster at southpole.se
Thu May 29 06:25:04 UTC 2008
Hello,
I'm trying to get some idea of the performance of a GlusterFS setup,
and I'm not very happy with the numbers I have got from my tests. I
would like to check with you folks and see if my setup and numbers
look correct.
The setup consists of 3 GlusterFS servers, each limited to 1 GB RAM
(boot parameter mem=1024M).
Hardware setup (3 servers, 1 client):
------------------------------
- 2 x Intel(R) Xeon(R) CPU E5420 @ 2.50GHz (8 cores)
- Intel Corporation 80003ES2LAN Gigabit Ethernet Controller
- ATA-7: ST380815AS, 3.AAD, max UDMA/133
Versions used:
---------------
glusterfs: glusterfs-1.3.8pre5
fuse: fuse-2.7.2glfs9
CentOS: 5
Local disk timings (hdparm -Tt):
--------------------------------
Timing cached reads: 24636 MB in 1.99 seconds = 12353.07 MB/sec
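The timing above was taken per server with a command of this form
(the device name here is an example):

  hdparm -Tt /dev/sda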
Network Throughput:
-------------------
Recv Socket   Send Socket   Send Message   Elapsed       Throughput
Size (bytes)  Size (bytes)  Size (bytes)   Time (secs)   (MBytes/sec)
87380         16384         16384          180.03        112.23
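The table has the format of a netperf TCP stream test; the run was of
roughly this form (netperf and the target IP are assumptions, the IP
taken from the client config):

  netperf -H 10.10.10.164 -l 180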
Server config:
------------------------------------------------------
volume brick
  type storage/posix
  option directory /data/export
end-volume

volume brick-ns
  type storage/posix
  option directory /data/export-ns
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  option auth.ip.brick.allow *
  option auth.ip.brick-ns.allow *
  subvolumes brick brick-ns
end-volume
------------------------------------------------------
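Each server runs glusterfsd against this spec file, started along
these lines (the spec file path is illustrative):

  glusterfsd -f /etc/glusterfs/glusterfs-server.vol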
Client config (unify):
------------------------------------------------------
volume remote1
  type protocol/client
  option transport-type tcp/client
  option remote-host 10.10.10.164
  option remote-subvolume brick
end-volume

volume remote2
  type protocol/client
  option transport-type tcp/client
  option remote-host 10.10.10.197
  option remote-subvolume brick
end-volume

volume remote3
  type protocol/client
  option transport-type tcp/client
  option remote-host 10.10.10.46
  option remote-subvolume brick
end-volume

volume remote-ns
  type protocol/client
  option transport-type tcp/client
  option remote-host 10.10.10.164
  option remote-subvolume brick-ns
end-volume

volume unify0
  type cluster/unify
  option scheduler rr
  option namespace remote-ns
  subvolumes remote1 remote2 remote3
end-volume
------------------------------------------------------
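The client is then mounted through FUSE with the matching spec file,
e.g. (spec file path and mount point are illustrative):

  glusterfs -f /etc/glusterfs/glusterfs-client-unify.vol /mnt/glusterfs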
Client config (stripe):
------------------------------------------------------
volume remote1
  type protocol/client
  option transport-type tcp/client
  option remote-host 10.10.10.164
  option remote-subvolume brick
end-volume

volume remote2
  type protocol/client
  option transport-type tcp/client
  option remote-host 10.10.10.197
  option remote-subvolume brick
end-volume

volume remote3
  type protocol/client
  option transport-type tcp/client
  option remote-host 10.10.10.46
  option remote-subvolume brick
end-volume

volume stripe
  type cluster/stripe
  option block-size *:1MB
  subvolumes remote1 remote2 remote3
end-volume
------------------------------------------------------
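As a quick sanity check that striping is in effect: with
cluster/stripe each brick should hold a sparse file of the full
apparent size with only about a third of its blocks allocated, so on
any one server (file name is an example, path from the server config):

  ls -ls /data/export/test.img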
These simple tests were made with dd. In the 3-concurrent-process
case, 3 concurrent writes were run first, followed by 3 concurrent
reads.
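The dd invocations were of roughly this form (block size and file
path are examples, chosen to match the byte counts below):

  dd if=/dev/zero of=/mnt/glusterfs/test.img bs=1M count=10000   # write (count=5000 for the 5 GB runs)
  dd if=/mnt/glusterfs/test.img of=/dev/null bs=1M               # read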
Test #1 (1 client, 3 servers, 1 process) - Stripe
------------------------------------------------------
Write test:
* 10485760000 bytes (10 GB) copied, 128.899 seconds, 81.3 MB/s
* 10485760000 bytes (10 GB) copied, 128.435 seconds, 81.6 MB/s
* 10485760000 bytes (10 GB) copied, 134.973 seconds, 77.7 MB/s
Read test:
* 10485760000 bytes (10 GB) copied, 530.27 seconds, 19.8 MB/s
* 10485760000 bytes (10 GB) copied, 455.247 seconds, 23.0 MB/s
* 10485760000 bytes (10 GB) copied, 449.548 seconds, 23.3 MB/s
Test #2 (1 client, 3 servers, 3 concurrent processes) - Stripe
------------------------------------------------------
Process 1:
----------
Write:
* 5242880000 bytes (5.2 GB) copied, 584.393 seconds, 9.0 MB/s
* 5242880000 bytes (5.2 GB) copied, 583.894 seconds, 9.0 MB/s
* 5242880000 bytes (5.2 GB) copied, 588.164 seconds, 8.9 MB/s
Read:
* 5242880000 bytes (5.2 GB) copied, 386.279 seconds, 13.6 MB/s
* 5242880000 bytes (5.2 GB) copied, 385.255 seconds, 13.6 MB/s
* 5242880000 bytes (5.2 GB) copied, 386.346 seconds, 13.6 MB/s
Process 2:
----------
Write:
* 5242880000 bytes (5.2 GB) copied, 587.611 seconds, 8.9 MB/s
* 5242880000 bytes (5.2 GB) copied, 589.912 seconds, 8.9 MB/s
* 5242880000 bytes (5.2 GB) copied, 605.053 seconds, 8.7 MB/s
Read:
* 5242880000 bytes (5.2 GB) copied, 411.217 seconds, 12.7 MB/s
* 5242880000 bytes (5.2 GB) copied, 386.303 seconds, 13.6 MB/s
* 5242880000 bytes (5.2 GB) copied, 386.303 seconds, 13.6 MB/s
Process 3:
----------
Write:
* 5242880000 bytes (5.2 GB) copied, 587.612 seconds, 8.9 MB/s
* 5242880000 bytes (5.2 GB) copied, 589.902 seconds, 8.9 MB/s
* 5242880000 bytes (5.2 GB) copied, 605.063 seconds, 8.7 MB/s
Read:
* 5242880000 bytes (5.2 GB) copied, 411.217 seconds, 12.7 MB/s
* 5242880000 bytes (5.2 GB) copied, 386.303 seconds, 13.6 MB/s
* 5242880000 bytes (5.2 GB) copied, 386.303 seconds, 13.6 MB/s
Test #3 (1 client, 3 servers, 3 concurrent processes) - Unify
----------------------------------------------------------------
(only two runs per process; we ran out of time with the hardware)
Process 1:
----------
Write:
* 5242880000 bytes (5.2 GB) copied, 858.661 seconds, 6.1 MB/s
* 5242880000 bytes (5.2 GB) copied, 218.973 seconds, 23.9 MB/s
Read:
* 5242880000 bytes (5.2 GB) copied, 447.404 seconds, 11.7 MB/s
* 5242880000 bytes (5.2 GB) copied, 432.071 seconds, 12.1 MB/s
Process 2:
----------
Write:
* 5242880000 bytes (5.2 GB) copied, 214.628 seconds, 24.4 MB/s
* 5242880000 bytes (5.2 GB) copied, 483.334 seconds, 10.8 MB/s
Read:
* 5242880000 bytes (5.2 GB) copied, 320.471 seconds, 16.4 MB/s
* 5242880000 bytes (5.2 GB) copied, 662.589 seconds, 7.9 MB/s
Process 3:
----------
Write:
* 5242880000 bytes (5.2 GB) copied, 214.799 seconds, 24.4 MB/s
* 5242880000 bytes (5.2 GB) copied, 809.602 seconds, 6.5 MB/s
Read:
* 5242880000 bytes (5.2 GB) copied, 660.162 seconds, 7.9 MB/s
* 5242880000 bytes (5.2 GB) copied, 458.078 seconds, 11.4 MB/s
# End of tests
What do you think? Do my numbers and setup look sane?
Cheers, Fredrik Steen
--
.Fredrik Steen
Senior Linux Systems Specialist
South Pole AB, www.southpole.se
08 - 56 23 7121