[Gluster-users] ZFS + GlusterFS raid5 low read performance

Yann MAUPU yann.maupu at clustervision.com
Thu Jan 19 08:35:31 UTC 2017

Hi everyone,

I am currently working on a project for which I am using:

  * 3 storage nodes connected with Omnipath
  * 6 SATA 750 GB HDDs per node (18 disks total)

I created a ZFS raidz1 pool on each node (5 data disks + 1 parity) and
used GlusterFS in dispersed (raid5-like) mode across the 3 nodes.

Unfortunately, with (very) big files I see quite low read performance
compared to write performance (write = 700 MB/s while read = 320 MB/s).
Do you know of any tuning/optimization parameters that could help me get
*better read performance*?

Here's more information on the configuration:

_ZFS raidz1 on each node:_

# zpool list
NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
tank  4,06T  10,5M  4,06T         -     0%     0%  1.00x  ONLINE  -

# zpool status
  pool: tank
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	tank        ONLINE       0     0     0
	  raidz1-0  ONLINE       0     0     0
	    sda     ONLINE       0     0     0
	    sdb     ONLINE       0     0     0
	    sdc     ONLINE       0     0     0
	    sdd     ONLINE       0     0     0
	    sde     ONLINE       0     0     0
	    sdf     ONLINE       0     0     0

errors: No known data errors

The command used to create the volume:

# zpool create -f tank raidz sda sdb sdc sdd sde sdf
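
The pool was created with defaults only. For reference, here is a sketch
of standard OpenZFS dataset properties that are often suggested for
Gluster bricks doing large sequential I/O. I have not applied these yet;
the property names are standard OpenZFS, but whether they help here is an
assumption:

```shell
# Sketch only -- standard OpenZFS tunables, not yet applied on my pool.
zfs set recordsize=1M tank    # larger records for big sequential transfers
zfs set atime=off tank        # avoid a metadata write on every read
zfs set xattr=sa tank         # Gluster relies on xattrs; store them in dnodes

# Verify the settings took effect:
zfs get recordsize,atime,xattr tank
```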

When running IOR on each node, I get about write perf = 460 MB/s and
read perf = 430 MB/s (writing 1024 GiB with xfer_size = 16 MiB).
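
For reproducibility, an IOR invocation matching those numbers would look
roughly like the sketch below (my exact flags and process count may
differ; the mount path and `-np` value here are illustrative):

```shell
# Hedged sketch: 4 ranks x 256 GiB = 1 TiB total, 16 MiB transfers,
# one file per process, write then read back.
mpirun -np 4 ior -w -r -t 16m -b 256g -F -o /tank/ior_testfile
```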

_GlusterFS raid5 through TCP (IPoIB) between the 3 nodes:_

# gluster volume create raid55 disperse 3 redundancy 1 sm01.opa:/tank/ec_point sm02.opa:/tank/ec_point sm03.opa:/tank/ec_point
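
The volume is otherwise at default settings. A sketch of the read-side
volume options I am considering, using standard `gluster volume set`
option names (whether they help a disperse volume with large files is an
assumption on my part; corrections welcome):

```shell
# Sketch only -- standard Gluster client-side read tunables, not applied yet.
gluster volume set raid55 performance.read-ahead on        # prefetch sequential reads
gluster volume set raid55 performance.io-cache on          # cache read data on clients
gluster volume set raid55 performance.client-io-threads on # parallelize client I/O
gluster volume set raid55 performance.cache-size 1GB       # enlarge the io-cache

# Check the resulting option list:
gluster volume info raid55
```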

There is a big difference between the read performance measured locally
on each ZFS node and the read performance through Gluster.

Thanks in advance :)


