[Gluster-users] GlusterFS Performance gigE
jacob at gluster.com
Tue Sep 14 14:56:22 UTC 2010
It looks to me like you are only testing write speeds, and in your
environment you are using replication. Because you are using replication,
each file is written to two servers synchronously, which means writes
will be slower than reads. Have you tested read performance?
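A quick way to compare the two directions with dd (a sketch only: a temp
directory stands in for the real mount point, and the file size is
illustrative; on a real client you would also drop the page cache before
the read so you measure the network rather than local RAM):

```shell
# Stand-in for your Gluster mount point (e.g. /mnt/gluster).
MNT=$(mktemp -d)

# Write test: on a replicated volume this data goes to both servers.
dd if=/dev/zero of="$MNT/writetest" bs=1M count=64 conv=fsync 2>/dev/null

# On a real client, drop the page cache first (needs root):
#   sync; echo 3 > /proc/sys/vm/drop_caches

# Read test: replication does not penalize reads, so expect this to be faster.
dd if="$MNT/writetest" of=/dev/null bs=1M 2>/dev/null
```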
Jacob Shucart | Gluster
E-Mail : Jacob at gluster.com
Direct : (408)770-1504
From: gluster-users-bounces at gluster.org
[mailto:gluster-users-bounces at gluster.org] On Behalf Of Henrique Haas
Sent: Monday, September 13, 2010 1:06 PM
To: Daniel Mons
Cc: gluster-users at gluster.org
Subject: Re: [Gluster-users] GlusterFS Performance gigE
I've tested with a mix of performance translators, using real data:
My test set has about 600K files, 43KB each on average; the total size
is about 19GB.
The underlying filesystem is ext4, on the default settings of Ubuntu
Server 10.04 (it is configured as an LVM volume, by the way).
My GlusterFS configuration uses:
*Server:* storage/posix, features/locks, performance/io-threads
*Client:* 4 remote nodes > 2 Replicate > Write-Behind > IO-Threads >
QuickRead > Stat-Prefetch
Reading the documentation, it seems Write-Behind is the translator that
might improve my write speed. I left it with a cache-size of 4MB.
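In the old handwritten volfile style, the write-behind layer of such a
client stack would look roughly like this (a sketch: the volume names and
the thread count are illustrative, only the 4MB cache-size comes from the
message above):

```
volume writebehind
  type performance/write-behind
  option cache-size 4MB          # the value mentioned above
  subvolumes replicate           # the 2x replicate layer below it
end-volume

volume iothreads
  type performance/io-threads
  option thread-count 16         # illustrative value
  subvolumes writebehind
end-volume
```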
OK, I did a simple "time cp -r /data /mnt", and the results were not good:
Now, the same copy, but with all files joined into a single tarball (17GB):
Thank you very much for your attention!
On Sat, Sep 11, 2010 at 8:19 PM, Daniel Mons <daemons at kanuka.com.au>
> On Sun, Sep 12, 2010 at 3:20 AM, Henrique Haas <henrique at dz6web.com>
> > Hello Jacob,
> > Larger block sizes gave me much better results: about *58MB/s*
> > on 1GigE!
> > So my concern now is about smaller files shared through Gluster.
> > Any tuning tips for this kind of file? (I'm using Ext4 and Gluster
> "dd" won't give you accurate results for testing file copies. Your
> slow writes with small block sizes are more likely due to I/O pressure
> and read starvation on the client side than on the server/write side.
> You should test something more real world instead. For instance:
> for i in `seq 1 1000000` ; do dd if=/dev/urandom of=$i bs=1K count=1 ; done
> That will create 1,000,000 1KB files (1GB of information) with random
> data on your local hard disk in the current directory. Most file
> systems store 4K blocks, so actual disk usage will be 4GB.
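The logical-size vs. on-disk gap is easy to see directly (a small sketch;
du's answer depends on the filesystem's block size, so the 4096 figure is
only the typical case):

```shell
# Create one 1KB file and compare its logical size to its disk usage.
F=$(mktemp)
dd if=/dev/urandom of="$F" bs=1K count=1 2>/dev/null

stat -c 'logical size: %s bytes' "$F"
du -B1 "$F"    # on a 4K-block filesystem this typically reports 4096
```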
> Now copy/rsync/whatever these files to your Gluster storage. (use a
> command like "time cp /blah/* /mnt/gluster/" to wallclock it).
> Now tar up all the files, and do the copy again using the single large
> tar file. Compare your results.
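Scaled down, the whole comparison might look like this (a sketch: 1,000
files instead of 1,000,000, and temp directories standing in for the
local disk and the Gluster mount):

```shell
SRC=$(mktemp -d)            # local staging directory
DST=$(mktemp -d)            # stand-in for /mnt/gluster
TAR=$(mktemp)

# Create 1,000 random 1KB files.
for i in $(seq 1 1000); do
  dd if=/dev/urandom of="$SRC/$i" bs=1K count=1 2>/dev/null
done

# Copy them one by one -- lots of per-file metadata round trips.
time cp "$SRC"/* "$DST"/

# Copy the same data as one tar file -- one big sequential write.
tar -cf "$TAR" -C "$SRC" .
time cp "$TAR" "$DST"/
```

On a replicated network filesystem the per-file overhead usually dominates
the first copy, which is why the single large file tends to be dramatically
faster.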
> From here, tune your performance translators:
> Some of these translators will aggregate smaller I/Os into larger
> blocks to improve read/write performance. The translator documentation
> explains what each one does. My advice is to take the defaults created by
> glusterfs-volgen and increment the values slowly on the relevant
> translators (note that bigger doesn't always equal better - you'll
> find a sweet spot where performance maxes out, and then most likely
> reduces again once values get too big).
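One way to do that increment-and-retest loop mechanically (a sketch; the
volfile path, the write-behind stanza, and the size list are placeholders):

```shell
VOLFILE=$(mktemp)    # stand-in for your real client volfile
printf 'volume writebehind\n  type performance/write-behind\n  option cache-size 1MB\nend-volume\n' > "$VOLFILE"

# Try progressively larger cache sizes, re-running the benchmark each time.
for size in 2MB 4MB 8MB; do
  sed -i "s/option cache-size .*/option cache-size $size/" "$VOLFILE"
  grep 'option cache-size' "$VOLFILE"
  # ...remount the client against the new volfile and re-run the timed
  # copy here; stop when throughput plateaus or regresses...
done
```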
> And then continue testing. Repeat for 4K, 16K, 32K files if you like
> (or a mix of them) to match what sort of data you'd expect on your
> file system (or better yet, use real world data if you have it lying
> around already).
> Also, if you don't need atime (last access time) information on your
> files, consider mounting the ext4 file system on the storage bricks
> with the "noatime" option. This can save unnecessary I/O on regularly
> accessed files. (I use this a lot on clustered file systems, as well
> as on virtual machine disk images and database files that get touched
> all the time by multiple systems, to cut down on I/O.)
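For example, the brick's fstab entry might look like this (the device and
mount point are placeholders for your LVM setup):

```
# /etc/fstab -- hypothetical device and mount point
/dev/mapper/ubuntu-brick1  /export/brick1  ext4  defaults,noatime  0  2
```

A running system can pick this up without a reboot via
"mount -o remount,noatime /export/brick1".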
> Hope that helps.
> Gluster-users mailing list
> Gluster-users at gluster.org
+55 (51) 3028.6602