[Gluster-users] GlusterFS Benchmarks

Pavan tcp at gluster.com
Wed May 4 07:44:39 UTC 2011


On Wednesday 04 May 2011 12:44 PM, Aleksanyan, Aleksandr wrote:
> I tested GlusterFS on this equipment:
>
> Backend: LSI 7000, 80 TB, 24 LUNs
> 4 OSS: Intel-based servers, connected to the LSI via 8 Gb Fibre Channel, 12 GB RAM

Can you please clarify what OSS means here?

And, please mention what your GlusterFS configuration looks like.
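
For example, on releases that have the gluster CLI (3.1 and later; an
assumption about your setup), the output of the command below would describe
the volume; on older volfile-managed releases, the server and client .vol
files carry the same information.

    # Run on any of the server nodes: shows the volume type (distribute/
    # replicate/stripe), the brick list, and any options that have been tuned
    gluster volume info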

Pavan

> 1 Intel-based main server, connected to the OSS via QDR InfiniBand, 12 GB RAM,
> and 16 load generators, each with 2 Xeon X5670 CPUs on board, 12 GB RAM, and
> QDR InfiniBand.
> I used IOR for the test and got the following results:
>
> /install/mpi/bin/mpirun --hostfile /gluster/C/nodes_1p -np 16 /gluster/C/IOR -F -k -b10G -t1m
> IOR-2.10.3: MPI Coordinated Test of Parallel I/O
> Run began: Tue Oct 19 09:27:03 2010
> Command line used: /gluster/C/IOR -F -k -b10G -t1m
> Machine: Linux node1
> Summary:
> api = POSIX
> test filename = testFile
> access = file-per-process
> ordering in a file = sequential offsets
> ordering inter file= no tasks offsets
> clients = 16 (1 per node)
> repetitions = 1
> xfersize = 1 MiB
> blocksize = 10 GiB
> aggregate filesize = 160 GiB
> Operation  Max (MiB)  Min (MiB)  Mean (MiB)  Std Dev  Max (OPs)  Min (OPs)  Mean (OPs)  Std Dev   Mean (s)
> ---------  ---------  ---------  ----------  -------  ---------  ---------  ----------  -------  ---------
> write        1720.80    1720.80     1720.80     0.00    1720.80    1720.80     1720.80     0.00   95.21174  EXCEL
> read         1415.64    1415.64     1415.64     0.00    1415.64    1415.64     1415.64     0.00  115.73604  EXCEL
> Max Write: 1720.80 MiB/sec (1804.39 MB/sec)
> Max Read: 1415.64 MiB/sec (1484.40 MB/sec)
> Run finished: Tue Oct 19 09:30:34 2010
> Why is *read* < *write*? Is this normal for GlusterFS?
> Best regards,
> Aleksandr Aleksanyan
> OJSC "T-Platforms"
> Tel: +7(495)744-0980 (1434)
>
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
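
A couple of notes on the run above. The IOR options used are, going by the
2.10.x documentation: -F selects file-per-process mode, -b10G is the per-task
block size, -t1m is the 1 MiB transfer size, and -k keeps the test files on
disk after the run.

On read < write: it is not unusual. One plausible contributor is that writes
can be absorbed by client-side aggregation (for example GlusterFS's
write-behind translator) and by the array's write cache, while reads must be
served end-to-end from the servers. A way to separate the two phases,
sketched below with the same paths as your run (the -o target is assumed to
be on the Gluster mount):

    # Write phase only (-w), keeping the files on disk (-k)
    /install/mpi/bin/mpirun --hostfile /gluster/C/nodes_1p -np 16 \
        /gluster/C/IOR -w -k -F -b10G -t1m -o /gluster/C/testFile

    # Read phase only (-r), against the files written above; remount the
    # clients (or drop their caches) first so reads do not come from page cache
    /install/mpi/bin/mpirun --hostfile /gluster/C/nodes_1p -np 16 \
        /gluster/C/IOR -r -F -b10G -t1m -o /gluster/C/testFile

If the read number is still lower after that, the gap is more likely genuine.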



