[Gluster-users] stripe nfs performance

Burnash, James jburnash at knight.com
Tue Jan 19 13:52:45 UTC 2010

From my interactions with the gluster.com folks during a similar evaluation, I was told that striping carries a heavy performance penalty, so it was suggested that I set up just the distribute translator on the client, on top of the subvolumes.

I was also told to use the read-ahead and write-behind translators, as well as io-cache, locks, and posix (which you're probably already using).

The pertinent bit from my client config file is below:

volume distribute
    type cluster/distribute
    subvolumes client3 client4
end-volume

volume writebehind
    type performance/write-behind
    option cache-size 4MB
    subvolumes distribute
end-volume

volume readahead
    type performance/read-ahead
    option page-count 4
    subvolumes writebehind
end-volume

volume iocache
    type performance/io-cache
    option cache-size 1GB
    option cache-timeout 1
    subvolumes readahead
end-volume

volume quickread
    type performance/quick-read
    option cache-timeout 1
    option max-file-size 64kB
    subvolumes iocache
end-volume

volume statprefetch
    type performance/stat-prefetch
    subvolumes quickread
end-volume
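For what it's worth, the cache-timeout option on io-cache and quick-read just bounds how stale cached data is allowed to be before the translator goes back to the bricks. Here is a toy sketch of that idea in plain Python - this is only a conceptual illustration, not GlusterFS code, and all the names in it are made up:

```python
import time

class TimedReadCache:
    """Toy model of an io-cache-style read cache with a staleness timeout."""

    def __init__(self, timeout=1.0):
        self.timeout = timeout   # like 'option cache-timeout 1' (seconds)
        self.entries = {}        # path -> (data, fetch_time)

    def read(self, path, fetch):
        entry = self.entries.get(path)
        if entry is not None:
            data, fetched_at = entry
            if time.monotonic() - fetched_at < self.timeout:
                return data      # fresh enough: served from cache
        data = fetch(path)       # expired or missing: go to the backend
        self.entries[path] = (data, time.monotonic())
        return data

calls = []
def fetch(path):
    calls.append(path)
    return f"contents of {path}"

cache = TimedReadCache(timeout=1.0)
cache.read("/mnt/gluster/a", fetch)
cache.read("/mnt/gluster/a", fetch)  # second read hits the cache
print(len(calls))  # 1 - the backend was only contacted once
```

The tradeoff is the same as in the real translator: a longer timeout means fewer round trips to the servers but a bigger window for reading stale data.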

Hopefully this helps - I have seen a measurable performance improvement under iozone (though not bonnie++) with these translators configured.

James Burnash
Unix SA

-----Original Message-----
From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of tegner at renget.se
Sent: Tuesday, January 19, 2010 6:07 AM
To: gluster-users at gluster.org
Subject: [Gluster-users] stripe nfs performance

I'm in the process of evaluating gluster, and to that end I installed
Gluster Storage Platform 3.0 (using the USB method) on four servers. I
created a striped volume and exported it over both NFS and the native
glusterfs client.

I nfs-mounted this volume on a node and copied a file of about 400 MB;
this took roughly twice as long as copying to an ordinary nfs-mounted
partition. Are these timings reasonable?
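As a sanity check on those numbers, here is a back-of-the-envelope estimate of the best case for a 400 MB copy over a dedicated gigabit link. The 85% usable-bandwidth figure is an assumption for TCP/NFS protocol overhead, not something measured on this setup:

```python
# Rough expected transfer time for a 400 MB file over gigabit Ethernet.
file_mb = 400
line_rate_mb_s = 1000 / 8            # 1 Gbit/s = 125 MB/s
usable_mb_s = line_rate_mb_s * 0.85  # assume ~85% usable after protocol overhead

seconds = file_mb / usable_mb_s
print(f"~{seconds:.1f} s at ~{usable_mb_s:.0f} MB/s")  # → ~3.8 s at ~106 MB/s
```

In practice old SATA disks will cap throughput well below the wire speed, so the interesting comparison is not against this ideal number but against the plain-NFS copy on the same hardware - which is exactly the factor-of-two gap reported above.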

Then I tried copying one instance of this file from several nodes
simultaneously, but those timings did not indicate superior performance
for the gluster-mounted file system either: copying from two and from
three nodes at the same time, the copies to the gluster-mounted volume
took about twice as long to complete as the same copies to a single
nfs-mounted partition.

Hardware is nothing fancy - the disks are old SATA drives (about 35 GB),
and all nodes sit on a gigabit switch.

I guess I'm doing something wrong here, but since I'm using the Gluster
Storage Platform there shouldn't be too many ways to go wrong ...




