[Gluster-devel] Infiniband throughput
Anand Avati
avati at zresearch.com
Sun Oct 21 06:45:23 UTC 2007
Nathan,
Please post your performance numbers for a point-to-point GlusterFS setup
(non-replicated); we can tune further from there. Your setup has
replication on the server side, so the throughput you measured includes
the replication overhead.
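For comparison, a minimal non-replicated pair of volfiles could look like
the sketch below (a sketch assuming 1.3-style volfile syntax over
ib-verbs; the export path and host address are placeholders, not taken
from your configs):

# server.vol -- one exported directory, no replication (sketch)
volume brick
  type storage/posix
  option directory /data/export      # placeholder export path
end-volume

volume server
  type protocol/server
  option transport-type ib-verbs/server
  option auth.ip.brick.allow *       # open auth; tighten for production
  subvolumes brick
end-volume

# client.vol -- single remote brick, no replication (sketch)
volume brick
  type protocol/client
  option transport-type ib-verbs/client
  option remote-host 192.168.0.11    # placeholder server address
  option remote-subvolume brick
end-volume

Mount with something like "glusterfs -f client.vol /mnt/test" and rerun
the same dd test against the mount point.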
thanks,
avati
2007/10/21, Nathan Allen Stratton <nathan at robotics.net>:
>
>
> Does anyone know why the wiki does not include the client and server
> config files used when the benchmarks are run?
>
> I am only getting 170 MB/s between dual quad-core 2.33 GHz Xeon boxes with
> no load. My local storage in each box is RAID 6 across 8 disks on 3ware
> 9650SE cards (PCI Express). Connectivity is single-port 4x Mellanox MT23108
> PCI-X cards. If I write directly to the RAID card I get 239 MB/s.
>
> My Configs:
> http://share.robotics.net/server_vs0.vol
> http://share.robotics.net/server_vs1.vol
> http://share.robotics.net/server_vs2.vol
> http://share.robotics.net/client.vol (same on every box)
>
> Write to gluster share:
> root@vs1.voilaip.net# time dd if=/dev/zero of=./8gbfile bs=512k count=16384
> 16384+0 records in
> 16384+0 records out
> 8589934592 bytes (8.6 GB) copied, 50.5589 seconds, 170 MB/s
>
> real 0m50.837s
> user 0m0.012s
> sys 0m22.341s
>
> Write to RAID directory:
> root@vs1.voilaip.net# time dd if=/dev/zero of=./8gbfile bs=512k count=16384
> 16384+0 records in
> 16384+0 records out
> 8589934592 bytes (8.6 GB) copied, 35.8675 seconds, 239 MB/s
>
> real 0m35.893s
> user 0m0.024s
> sys 0m20.789s
>
> At first I thought it might be because my InfiniBand cards are PCI-X
> rather than PCIe, but when I test the raw link I get 477 MB/s:
>
> root@vs1.voilaip.net# ib_rdma_bw 192.168.0.12
> 12415: | port=18515 | ib_port=1 | size=65536 | tx_depth=100 | iters=1000 | duplex=0 | cma=0 |
> 12415: Local address: LID 0x03, QPN 0x10408, PSN 0x689d5f RKey 0x5f4c00bd VAddr 0x002aaaab2f5000
> 12415: Remote address: LID 0x04, QPN 0x5f040a, PSN 0xce42f6, RKey 0x14fa00c0 VAddr 0x002aaaab2f5000
>
>
> 12415: Bandwidth peak (#0 to #954): 477.292 MB/sec
> 12415: Bandwidth average: 477.2 MB/sec
> 12415: Service Demand peak (#0 to #954): 4774 cycles/KB
> 12415: Service Demand Avg : 4775 cycles/KB
>
>
> -Nathan
>
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at nongnu.org
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>
--
It always takes longer than you expect, even when you take into account
Hofstadter's Law.
-- Hofstadter's Law