[Gluster-users] performance stops at 1Gb
P.Gotwalt
p.gotwalt at uci.ru.nl
Mon Dec 6 17:21:58 UTC 2010
Good idea with the dd. Here are the results:
dd if=/dev/zero of=/gluster/file bs=1M count=1K
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 9.53609 seconds, 113 MB/s
dd if=/gluster/file of=/dev/null bs=1M count=1K
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 9.93758 seconds, 108 MB/s
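So a single stream also tops out right at the 1Gbit/s mark. A quick sketch of a follow-up test (file names are only examples): run several dd streams in parallel and add up the per-process rates, to see whether the limit is per stream or for the mount as a whole:
]# for i in 1 2 3 4; do dd if=/dev/zero of=/gluster/ddtest.$i bs=1M count=1024 & done; wait
]# echo 3 > /proc/sys/vm/drop_caches   # so the reads below don't come from the client page cache
]# for i in 1 2 3 4; do dd if=/gluster/ddtest.$i of=/dev/null bs=1M & done; wait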
My configuration:
]# mount
..
glusterfs#node70:/stripevol on /gluster type fuse
(rw,allow_other,default_permissions,max_read=131072)
]# ssh node70 gluster volume info stripevol
Volume Name: stripevol
Type: Stripe
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: node70.storage.surfnet.nl:/data1
Brick2: node80.storage.surfnet.nl:/data1
Brick3: node90.storage.surfnet.nl:/data1
Brick4: node100.storage.surfnet.nl:/data1
]#
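For completeness, the stripe volume above would have been created with something along these lines (reconstructed from the volume info, so treat it as a sketch of the 3.1 CLI rather than the exact commands I ran):
]# gluster volume create stripevol stripe 4 transport tcp \
     node70.storage.surfnet.nl:/data1 node80.storage.surfnet.nl:/data1 \
     node90.storage.surfnet.nl:/data1 node100.storage.surfnet.nl:/data1
]# gluster volume start stripevol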
To be sure my 10Gb NIC is working:
]# ethtool eth2
Settings for eth2:
Supported ports: [ FIBRE ]
Supported link modes: 10000baseT/Full
Supports auto-negotiation: No
Advertised link modes: 10000baseT/Full
Advertised auto-negotiation: No
Speed: 10000Mb/s
Duplex: Full
Port: FIBRE
PHYAD: 0
Transceiver: external
Auto-negotiation: off
Supports Wake-on: d
Wake-on: d
Current message level: 0x00000007 (7)
Link detected: yes
I used bonnie++ for my tests, with a block size of 32KB.
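The parallel runs look roughly like this (directory names and the per-instance size are illustrative; the 32KB value goes in via bonnie++'s size:chunk-size syntax):
]# for i in 1 2 3 4; do mkdir -p /gluster/bonnie.$i; done
]# for i in 1 2 3 4; do bonnie++ -d /gluster/bonnie.$i -s 8g:32k -n 0 -u root & done; wait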
Peter
From: Anand Avati [mailto:anand.avati at gmail.com]
Sent: 03 December 2010 14:03
To: Gotwalt, P.
CC: gluster-users at gluster.org
Subject: Re: [Gluster-users] performance stops at 1Gb
Do both read and write throughput peak at 1Gbit/s? What is the block size
used for performing I/O? Can you get the output of -
1. dd if=/dev/zero of=/mnt/stripe/file bs=1M count=1K
2. dd if=/mnt/stripe/file of=/dev/null bs=1M count=1K
Just one instance of dd is enough as the client network interface (10Gbit/s)
has enough juice to saturate 4x1Gbit servers.
Avati
On Fri, Dec 3, 2010 at 6:06 PM, Gotwalt, P. <peter.gotwalt at uci.ru.nl> wrote:
Craig,
Using multiple parallel bonnie++ benchmarks (4, 8, 16) does use several
files. These files are 1GB each, and we make sure there are at least
32 of them. As we have multiple processes (4, 8 or 16 bonnie++ instances)
and each uses several files, we spread the I/O over different storage
bricks. I can see this when monitoring network and disk activity on the bricks.
For example: when bonnie++ does block reads/writes on a striped (4
bricks) volume, I notice that the load from the client (network throughput)
is evenly spread over the 4 nodes. These nodes have plenty of CPU, memory,
network and disk resources left! Still, the accumulated throughput doesn't
get above 1 Gb.
The 10Gb NIC at the client is set to fixed 10Gb, full duplex. All the
NICs on the storage bricks are 1Gb, fixed, full duplex. The 10Gb client
(dual quad-core, 16GB) has plenty of resources to run 16 bonnie++ instances
in parallel. We should be able to get more than this 1Gb of throughput,
especially with a striped volume.
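One more check I can do (a sketch, hostnames as in the brick list): aggregate network throughput from the client to all four bricks at once, independent of gluster, with the same iperf3 I used for the earlier network tests:
# on each brick:
]# iperf3 -s
# on the client, against all four bricks at the same time:
]# for h in node70 node80 node90 node100; do iperf3 -c $h.storage.surfnet.nl -t 30 & done; wait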
What kind of benchmarks do you run? And with what kind of setup?
Peter
> Peter -
> Using Gluster, the performance of any single file is going to be
> limited to the performance of the server on which it exists, or, in the
> case of a striped volume, of the server on which the segment of the
> file you are accessing exists. If you were able to start 4 processes
> accessing different parts of the striped file, or lots of different
> files in a distribute cluster, you would see your performance increase
> significantly.
> Thanks,
> Craig
> -->
> Craig Carl
> Senior Systems Engineer
> Gluster
>
>
> On 11/26/2010 07:57 AM, Gotwalt, P. wrote:
> > Hi All,
> >
> > I am doing some tests with gluster (3.1) and have the problem that I
> > cannot get throughput higher than 1 Gb (yes, bit!) with 4 storage bricks.
> > My setup:
> >
> > 4 storage bricks (dual-core, 4GB mem), each with 3 SATA 1TB disks,
> > connected to a switch with 1Gb NICs. In my tests I only use 1 SATA
> > disk as a volume, per brick.
> > 1 client (2x quad-core, 16GB mem) with a 10Gb NIC to the same switch
> > as the bricks.
> >
> > When using striped or distributed configurations, with all 4 bricks
> > configured to act as a server, the performance never gets higher than
> > just below 1 Gb! I tested with 4, 8 and 16 parallel bonnie++ runs.
> >
> > The idea is that parallel bonnies create enough files to get
> > distributed over the storage bricks, and that all these bonnies will
> > deliver enough throughput to fill up this 10Gb line. I expect the
> > throughput to be at most 4Gb, because that's the maximum the 4 storage
> > bricks together can produce.
> >
> > I also tested the throughput of the network with iperf3 and got:
> > - 5Gb/s to a second temporary client on another switch, 200 km from my
> > site, connected with a 5Gb fiber
> > - 908-920 Mb/s to the interfaces of the bricks.
> > So the network seems OK.
> >
> > Can someone advise me on why I don't get 4Gb? Or can someone advise
> > me on a better setup with the equipment I have?
> >
> >
> > Peter Gotwalt