[Gluster-devel] Infiniband throughput

Nathan Allen Stratton nathan at robotics.net
Sun Oct 21 14:57:04 UTC 2007


On Sun, 21 Oct 2007, Anand Avati wrote:

> Nathan,
>  please post your performance numbers on a point-to-point glusterfs setup
> (non replicated). We can tune it further from there. Your setup has
> replication on the server side and the throughput you achieved is
> considering the replication.

Odd, it is slower than with AFR on. I guess that could be because with AFR
it was writing to both the local disk and the network, while with AFR off
it may just be writing over the network.
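
For reference, the point-to-point run below uses a plain protocol/client
volume on the client side, along these lines (an illustrative sketch in the
1.3 volfile syntax; the host and volume names here are placeholders, my
actual configs are at the URLs quoted below):

volume remote
  type protocol/client
  option transport-type ib-verbs/client  # tcp/client also works
  option remote-host vs0                 # placeholder server hostname
  option remote-subvolume brick          # volume name the server exports
end-volume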

root@vs1.voilaip.net# time dd if=/dev/zero of=/share/8gbfile bs=512k count=16384
16384+0 records in
16384+0 records out
8589934592 bytes (8.6 GB) copied, 58.7686 seconds, 146 MB/s

real    0m58.779s
user    0m0.000s
sys     0m0.224s
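
One caveat: dd here writes through the page cache, so write-back timing can
skew the number. If your coreutils build supports it, conv=fdatasync (or
oflag=direct) forces the data to disk before the clock stops -- an
illustrative variant, not the run above:

time dd if=/dev/zero of=/share/8gbfile bs=512k count=16384 conv=fdatasync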

Reads are fast:
root@vs1.voilaip.net# time dd if=/share/8gbfile of=/dev/null bs=512k count=16384
16384+0 records in
16384+0 records out
8589934592 bytes (8.6 GB) copied, 40.4185 seconds, 213 MB/s

real    0m40.460s
user    0m0.016s
sys     0m2.448s
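
The read number is only meaningful if the file is not still cached in RAM
from the write. On 2.6.16+ kernels the page cache can be dropped first
(illustrative):

sync
echo 3 > /proc/sys/vm/drop_caches
time dd if=/share/8gbfile of=/dev/null bs=512k count=16384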

When I turn AFR back on and write and then read a file, I get lower read
throughput. I would have thought it would be faster than without AFR, since
there are two copies of the file that could be read from.

root@vs1.voilaip.net# time dd if=/share/8gbfile of=/dev/null bs=512k count=16384
16384+0 records in
16384+0 records out
8589934592 bytes (8.6 GB) copied, 41.2238 seconds, 208 MB/s

real    0m41.236s
user    0m0.004s
sys     0m2.224s
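
As far as I understand, a single sequential reader is served from one
subvolume, so AFR would not be expected to beat the point-to-point read
here. Two concurrent readers would be a better test of whether reads get
spread across the replicas (illustrative; 8gbfile2 is a hypothetical second
file written the same way):

dd if=/share/8gbfile of=/dev/null bs=512k count=16384 &
dd if=/share/8gbfile2 of=/dev/null bs=512k count=16384 &
wait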


> 2007/10/21, Nathan Allen Stratton <nathan at robotics.net>:
> >
> >
> > Does anyone know why the wiki does not have the config files used for
> > client and server when the benchmarks are run?
> >
> > I am only getting 170 MB/s between two unloaded dual quad-core 2.33 GHz
> > Xeon boxes. Local storage in each box is RAID 6 over 8 disks on 3ware
> > 9650SE (PCI Express) cards. Connectivity is single-port 4x Mellanox
> > MT23108 PCI-X cards. If I write to the RAID card directly I get 239 MB/s.
> >
> > My Configs:
> > http://share.robotics.net/server_vs0.vol
> > http://share.robotics.net/server_vs1.vol
> > http://share.robotics.net/server_vs2.vol
> > http://share.robotics.net/client.vol (same on every box)
> >
> > Write to gluster share:
> > root@vs1.voilaip.net# time dd if=/dev/zero of=./8gbfile bs=512k count=16384
> > 16384+0 records in
> > 16384+0 records out
> > 8589934592 bytes (8.6 GB) copied, 50.5589 seconds, 170 MB/s
> >
> > real    0m50.837s
> > user    0m0.012s
> > sys     0m22.341s
> >
> > Write to RAID directory:
> > root@vs1.voilaip.net# time dd if=/dev/zero of=./8gbfile bs=512k count=16384
> > 16384+0 records in
> > 16384+0 records out
> > 8589934592 bytes (8.6 GB) copied, 35.8675 seconds, 239 MB/s
> >
> > real    0m35.893s
> > user    0m0.024s
> > sys     0m20.789s
> >
> > At first I thought it might be because my InfiniBand cards are PCI-X
> > rather than PCIe, but when I test the raw link I get 477 MB/s:
> >
> > root@vs1.voilaip.net# ib_rdma_bw 192.168.0.12
> > 12415: | port=18515 | ib_port=1 | size=65536 | tx_depth=100 | iters=1000 |
> > duplex=0 | cma=0 |
> > 12415: Local address:  LID 0x03, QPN 0x10408, PSN 0x689d5f RKey 0x5f4c00bd
> > VAddr 0x002aaaab2f5000
> > 12415: Remote address: LID 0x04, QPN 0x5f040a, PSN 0xce42f6, RKey
> > 0x14fa00c0 VAddr 0x002aaaab2f5000
> >
> >
> > 12415: Bandwidth peak (#0 to #954): 477.292 MB/sec
> > 12415: Bandwidth average: 477.2 MB/sec
> > 12415: Service Demand peak (#0 to #954): 4774 cycles/KB
> > 12415: Service Demand Avg  : 4775 cycles/KB
> >
> >
> > -Nathan
> >
>
> --
> It always takes longer than you expect, even when you take into account
> Hofstadter's Law.
>
> -- Hofstadter's Law
>
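
One more data point that would help isolate the transport: raw TCP
throughput over IPoIB, to compare against both the RDMA figure and the
glusterfs number (illustrative, assumes iperf is installed and vs0 is at
192.168.0.12):

iperf -s                     # on the server
iperf -c 192.168.0.12 -t 30  # on the client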