[Gluster-users] Gluster-users Digest, Vol 59, Issue 15 - GlusterFS performance

Ben England bengland at redhat.com
Sat Mar 2 15:47:09 UTC 2013

----- Original Message -----
> From: gluster-users-request at gluster.org
> To: gluster-users at gluster.org
> Sent: Friday, March 1, 2013 4:03:13 PM
> Subject: Gluster-users Digest, Vol 59, Issue 15
> ------------------------------
> Message: 2
> Date: Fri, 01 Mar 2013 10:22:21 -0800
> From: Joe Julian <joe at julianfamily.org>
> To: gluster-users at gluster.org
> Subject: Re: [Gluster-users] GlusterFS performance
> Message-ID: <5130F1DD.9050602 at julianfamily.org>
> Content-Type: text/plain; charset="iso-8859-1"; Format="flowed"
> The kernel developers introduced a bug into ext4 that has yet to be
> fixed. If you use xfs you won't have those hangs.
> On 03/01/2013 01:30 AM, Nikita A Kardashin wrote:
> > Hello again!
> >
> > I am complete rebuild my storage.
> > As base: ext4 over mdadm-raid1
> > Gluster volume in distributed-replicated mode with settings:
> >
> > Options Reconfigured:
> > performance.cache-size: 1024MB
> > nfs.disable: on
> > performance.write-behind-window-size: 4MB
> > performance.io-thread-count: 64
> > features.quota: off
> > features.quota-timeout: 1800
> > performance.io-cache: on
> > performance.write-behind: on
> > performance.flush-behind: on
> > performance.read-ahead: on
> >
> > As result, I got write performance about 80MB/s on dd if=/dev/zero
> > of=testfile.bin bs=100M count=10, 

Make sure your network and storage bricks are performing as you expect them to; Gluster is only as good as the underlying hardware.  What happens with reads?  What happens when multiple threads are writing at once?

for n in `seq 1 4` ; do
  dd if=/dev/zero of=testfile$n.bin bs=100M count=10 &
done
time wait
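To answer the read question, here is a sketch of the matching parallel read test (the file names assume the write loop above was run first; dropping the page cache requires root, but is needed so you measure the bricks rather than RAM):

```shell
# drop cached pages so reads hit the Gluster volume, not local RAM (requires root)
echo 3 > /proc/sys/vm/drop_caches

# read the files written by the parallel write test, 4 threads at once
for n in `seq 1 4` ; do
  dd if=testfile$n.bin of=/dev/null bs=100M &
done
time wait
```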

> > If I try to execute above command inside virtual machine (KVM),
> > first
> > time all going right - about 900MB/s (cache effect, I think), but
> > if I
> > run this test again on existing file - task (dd) hungs up and can
> > be
> > stopped only by Ctrl+C.

In the future, post the qemu process command line (from ps awux).  Are you writing to a "local" file system inside the virtual disk image, or are you mounting Gluster from inside the VM?  If you are going through /dev/vda, are you using KVM qemu cache=writeback?  You could try cache=writethrough or cache=none; see comments below for cache=none.  Also, try io=threads rather than io=native.
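For illustration (the image path, memory size, and volume layout are placeholders, not taken from the original report), these settings map onto the qemu-kvm command line like this; aio=threads is the qemu equivalent of libvirt's io='threads':

```shell
# illustrative qemu-kvm invocation; path and sizes are placeholders
qemu-kvm -m 2048 \
  -drive file=/mnt/gluster/vm1.img,if=virtio,cache=none,aio=threads
```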

> >
> > Overall virtual system latency is poor too. For example, apt-get
> > upgrade upgrading system very, very slow, freezing on "Unpacking
> > replacement" and other io-related steps.
> >

If you don't have a fast connection to storage, the Linux VM will buffer write data in the kernel buffer cache until it runs out of memory for that (vm.dirty_ratio), then freeze any process issuing writes.  If your VM has a lot of memory relative to storage speed, this can result in very long delays.  Try reducing the kernel's vm.dirty_background_ratio so writeback starts sooner, and vm.dirty_ratio so the freezes don't last as long.  You can even reduce the VM's block device queue depth.  But most of all, make sure Gluster writes perform near a typical local block device's speed.
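For example (the values below are illustrative starting points, not recommendations for every system), the dirty-page thresholds can be lowered at runtime inside the VM with sysctl:

```shell
# show current writeback thresholds
sysctl vm.dirty_background_ratio vm.dirty_ratio

# start background writeback sooner, and cap dirty pages lower so that
# writers block in shorter bursts (requires root; illustrative values)
sysctl -w vm.dirty_background_ratio=2
sysctl -w vm.dirty_ratio=10
```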

> > Does glusterfs have any tuning options, that can help me?
> >
> >

If your workload is strictly large-file, try this volume tuning:

storage.linux-aio: off (default)
cluster.eager-lock: enable (default is disabled)
network.remote-dio: on (default is off)
performance.write-behind-window-size: 1MB (default)
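The non-default settings above can be applied with gluster volume set; the volume name "myvol" here is a placeholder for your own volume:

```shell
# apply the large-file tuning; "myvol" is a placeholder volume name
gluster volume set myvol cluster.eager-lock enable
gluster volume set myvol network.remote-dio on
gluster volume set myvol performance.write-behind-window-size 1MB
```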

For a pure single-thread sequential read workload, you can tune the read-ahead translator to be more aggressive.  This will help single-thread reads, but don't do it for other workloads, such as virtual machine images in the Gluster volume (those appear to Gluster as more of a random I/O workload).

performance.read-ahead-page-count: 16 (default is 4 128-KB prefetched buffers)
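As with the options above, this is set per volume (again, "myvol" is a placeholder name):

```shell
# raise read-ahead from 4 to 16 128-KB prefetched buffers
gluster volume set myvol performance.read-ahead-page-count 16
```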


The Red Hat Storage distribution will also tune the Linux block device for better performance on many workloads.
