[Gluster-users] Re: GlusterFS Benchmarks

Aleksanyan, Aleksandr Aleksandr.Aleksanyan at t-platforms.ru
Wed May 4 08:08:21 UTC 2011


For the GlusterFS configuration topology, please see the attached file (GlusterConf.png).

Server conf:
2x Intel Xeon 5570, 12 GB RAM

Client conf:
2x Intel Xeon X5670, 12 GB RAM
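
For completeness, this is roughly how a client would mount the volume with the native (FUSE) client; a minimal sketch, assuming the /gluster mount point implied by the IOR paths below (any of the oss servers can serve the volfile, and the client picks up the rdma transport from it):

  mount -t glusterfs oss1:/gluster /gluster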

gluster volume info:

Volume Name: gluster
Type: Distribute
Status: Started
Number of Bricks: 16
Transport-type: rdma
Bricks:
Brick1: oss1:/mnt/ost1
Brick2: oss1:/mnt/ost2
Brick3: oss1:/mnt/ost3
Brick4: oss1:/mnt/ost4
Brick5: oss2:/mnt/ost1
Brick6: oss2:/mnt/ost2
Brick7: oss2:/mnt/ost3
Brick8: oss2:/mnt/ost4
Brick9: oss3:/mnt/ost1
Brick10: oss3:/mnt/ost2
Brick11: oss3:/mnt/ost3
Brick12: oss3:/mnt/ost4
Brick13: oss4:/mnt/ost1
Brick14: oss4:/mnt/ost2
Brick15: oss4:/mnt/ost3
Brick16: oss4:/mnt/ost4
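
For reference, a 16-brick distribute volume over RDMA like the one above would be created along these lines with the standard gluster CLI (a sketch only; exact option syntax can vary between GlusterFS releases):

  gluster volume create gluster transport rdma \
      oss1:/mnt/ost1 oss1:/mnt/ost2 oss1:/mnt/ost3 oss1:/mnt/ost4 \
      oss2:/mnt/ost1 oss2:/mnt/ost2 oss2:/mnt/ost3 oss2:/mnt/ost4 \
      oss3:/mnt/ost1 oss3:/mnt/ost2 oss3:/mnt/ost3 oss3:/mnt/ost4 \
      oss4:/mnt/ost1 oss4:/mnt/ost2 oss4:/mnt/ost3 oss4:/mnt/ost4
  gluster volume start gluster

Note that plain Distribute places each file whole on a single brick (no replication or striping), so one stream is bounded by a single brick's bandwidth, while many concurrent files spread across all 16 bricks.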



Best regards,
  Aleksandr Aleksanyan
  T-Platforms (OAO)
  Tel: +7(495)744-0980 (1434)
________________________________________
From: Pavan [tcp at gluster.com]
Sent: May 4, 2011 11:44
To: Aleksanyan, Aleksandr
Cc: gluster-users at gluster.org
Subject: Re: [Gluster-users] GlusterFS Benchmarks

On Wednesday 04 May 2011 12:44 PM, Aleksanyan, Aleksandr wrote:
> I tested GlusterFS on the following equipment:
>
> Backend: LSI 7000, 80 TB, 24 LUNs
> 4 OSS (Intel-based servers), connected to the LSI via 8 Gb Fibre Channel, 12 GB RAM

Can you please clarify what OSS means here?

Also, please describe what your GlusterFS configuration looks like.

Pavan

> 1 Intel-based main server, connected to the OSS via QDR InfiniBand, 12 GB RAM,
> and 16 load generators, each with 2 Xeon X5670 CPUs on board, 12 GB RAM, and QDR InfiniBand.
> I used IOR for the test and got the following results:
> /install/mpi/bin/mpirun --hostfile /gluster/C/nodes_1p -np 16
> /gluster/C/IOR -F -k -b10G -t1m
> IOR-2.10.3: MPI Coordinated Test of Parallel I/O
> Run began: Tue Oct 19 09:27:03 2010
> Command line used: /gluster/C/IOR -F -k -b10G -t1m
> Machine: Linux node1
> Summary:
> api = POSIX
> test filename = testFile
> access = file-per-process
> ordering in a file = sequential offsets
> ordering inter file= no tasks offsets
> clients = 16 (1 per node)
> repetitions = 1
> xfersize = 1 MiB
> blocksize = 10 GiB
> aggregate filesize = 160 GiB
> Operation  Max (MiB)  Min (MiB)  Mean (MiB)  Std Dev  Max (OPs)  Min (OPs)  Mean (OPs)  Std Dev  Mean (s)
> ---------  ---------  ---------  ----------  -------  ---------  ---------  ----------  -------  --------
> write      *1720.80*  1720.80    1720.80     0.00     1720.80    1720.80    1720.80     0.00     95.21174   EXCEL
> read       *1415.64*  1415.64    1415.64     0.00     1415.64    1415.64    1415.64     0.00     115.73604  EXCEL
> Max Write: 1720.80 MiB/sec (1804.39 MB/sec)
> Max Read: 1415.64 MiB/sec (1484.40 MB/sec)
> Run finished: Tue Oct 19 09:30:34 2010
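
A quick sanity check on these figures (my own back-of-the-envelope arithmetic, not part of the original output): the aggregate file size is 16 tasks x 10 GiB = 163840 MiB, and dividing by the reported mean times reproduces the rates:

  $ echo "163840 / 95.21174" | bc -l    # write: ~1720.80 MiB/s
  $ echo "163840 / 115.73604" | bc -l   # read:  ~1415.64 MiB/s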
> Why is *read* < *write*? Is that normal for GlusterFS?
> best regards
> Aleksandr
> Best regards,
> Aleksandr Aleksanyan
> T-Platforms (OAO)
> Tel: +7(495)744-0980 (1434)
>
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
-------------- next part --------------
A non-text attachment was scrubbed...
Name: GlusterConf.png
Type: image/x-png
Size: 63722 bytes
Desc: GlusterConf.png
URL: <http://supercolony.gluster.org/pipermail/gluster-users/attachments/20110504/22370142/attachment.bin>

