[Gluster-users] GlusterFS Benchmarks
Aleksanyan, Aleksandr
Aleksandr.Aleksanyan at t-platforms.ru
Wed May 4 07:14:42 UTC 2011
I tested GlusterFS on the following equipment:
Backend: LSI 7000, 80 TB, 24 LUNs
4 OSS: Intel-based servers, connected to the LSI via 8 Gb Fibre Channel, 12 GB RAM each
1 Intel-based main server, connected to the OSS via QDR InfiniBand, 12 GB RAM
16 load generators, each with 2 Xeon X5670 CPUs on board, 12 GB RAM, QDR InfiniBand
I used IOR for the test and got the following results:
/install/mpi/bin/mpirun --hostfile /gluster/C/nodes_1p -np 16 /gluster/C/IOR -F -k -b10G -t1m
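For anyone unfamiliar with IOR's options, here is an annotated form of that command (a sketch based on IOR 2.x flag semantics, not taken from the original run):

```shell
# Annotated form of the IOR invocation above (IOR 2.x flag semantics):
#   --hostfile /gluster/C/nodes_1p, -np 16 : 16 MPI tasks, one per node
#   -F    : file-per-process mode (each task gets its own test file)
#   -k    : keep the test files after the run instead of deleting them
#   -b10G : block size, 10 GiB per task (16 tasks x 10 GiB = 160 GiB aggregate)
#   -t1m  : transfer size, 1 MiB per I/O request
/install/mpi/bin/mpirun --hostfile /gluster/C/nodes_1p -np 16 \
    /gluster/C/IOR -F -k -b10G -t1m
```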
IOR-2.10.3: MPI Coordinated Test of Parallel I/O
Run began: Tue Oct 19 09:27:03 2010
Command line used: /gluster/C/IOR -F -k -b10G -t1m
Machine: Linux node1
Summary:
api = POSIX
test filename = testFile
access = file-per-process
ordering in a file = sequential offsets
ordering inter file= no tasks offsets
clients = 16 (1 per node)
repetitions = 1
xfersize = 1 MiB
blocksize = 10 GiB
aggregate filesize = 160 GiB
Operation  Max (MiB)  Min (MiB)  Mean (MiB)  Std Dev  Max (OPs)  Min (OPs)  Mean (OPs)  Std Dev  Mean (s)
---------  ---------  ---------  ----------  -------  ---------  ---------  ----------  -------  ---------
write      1720.80    1720.80    1720.80     0.00     1720.80    1720.80    1720.80     0.00     95.21174   EXCEL
read       1415.64    1415.64    1415.64     0.00     1415.64    1415.64    1415.64     0.00     115.73604  EXCEL
Max Write: 1720.80 MiB/sec (1804.39 MB/sec)
Max Read: 1415.64 MiB/sec (1484.40 MB/sec)
Run finished: Tue Oct 19 09:30:34 2010
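As a quick sanity check, the reported bandwidths follow directly from the 160 GiB aggregate file size and the mean run times in the summary (plain arithmetic, no assumptions beyond the numbers printed above):

```python
# Reproduce IOR's reported bandwidth from the summary numbers above.
aggregate_mib = 160 * 1024               # 160 GiB aggregate file size, in MiB

write_mib_s = aggregate_mib / 95.21174   # mean write time, seconds
read_mib_s  = aggregate_mib / 115.73604  # mean read time, seconds

# MiB/s -> MB/s (1 MiB = 1.048576 MB), matching IOR's "Max Write/Read" lines
write_mb_s = write_mib_s * 1.048576

print(f"write: {write_mib_s:.2f} MiB/s ({write_mb_s:.2f} MB/s)")
print(f"read:  {read_mib_s:.2f} MiB/s")
```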
Why is read slower than write? Is this normal for GlusterFS?
Best regards,
Aleksandr Aleksanyan
T-Platforms JSC
Tel: +7 (495) 744-0980 (1434)