[Gluster-users] ZFS + GlusterFS raid5 low read performance

Yann Maupu yann.maupu at clustervision.com
Mon Jan 23 09:45:02 UTC 2017


Hi Xavier,

Thanks a lot for your message; it really improved the performance.
I updated both the ZFS recordsize (to 256K on all nodes) and the event-thread
count (to 8), and now the read performance is even better than the write
performance.
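
For reference, these are roughly the commands I used (with the pool named tank
and the volume named raid55, as in my original mail below); the exact
invocation may differ on other setups:

# zfs set recordsize=256K tank    (on each node; only affects newly written files)
# gluster volume set raid55 client.event-threads 8
# gluster volume set raid55 server.event-threads 8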

Regards,
Yann

On 19 January 2017 at 13:09, Xavier Hernandez <xhernandez at datalab.es> wrote:

> Hi Yann,
>
> On 19/01/17 09:35, Yann MAUPU wrote:
>
>> Hi everyone,
>>
>> I am currently working on a project for which I am using:
>>
>>   * 3 storage nodes connected with Omni-Path
>>   * 6 SATA 750 GB HDDs per node (18 disks in total)
>>
>> I created a ZFS raidz1 on each node (5 data disks + 1 parity) and used
>> GlusterFS in raid5 mode (disperse 3, redundancy 1) between the 3 nodes.
>>
>> Unfortunately, with (very) big files I get quite low read performance
>> compared to write performance (write = 700 MB/s while read = 320 MB/s).
>>
>> Do you know of any tuning/optimization parameters that could help me get
>> *better read performance*?
>>
>
> You can try setting client.event-threads and server.event-threads to higher
> values, for example 4 or 8 (the default is 2):
>
> gluster volume set <volname> client.event-threads 8
> gluster volume set <volname> server.event-threads 8
>
> Xavi
>
>
>>
>> Here's more information on the configuration:
>>
>> _ZFS raidz1 on each node:_
>>
>> # zpool list
>> NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
>> tank  4,06T  10,5M  4,06T         -     0%     0%  1.00x  ONLINE  -
>>
>> # zpool status
>>   pool: tank
>>  state: ONLINE
>>   scan: none requested
>> config:
>>
>>     NAME        STATE     READ WRITE CKSUM
>>     tank        ONLINE       0     0     0
>>       raidz1-0  ONLINE       0     0     0
>>         sda     ONLINE       0     0     0
>>         sdb     ONLINE       0     0     0
>>         sdc     ONLINE       0     0     0
>>         sdd     ONLINE       0     0     0
>>         sde     ONLINE       0     0     0
>>         sdf     ONLINE       0     0     0
>>
>> errors: No known data errors
>>
>> The command used to create the volume:
>>
>> # zpool create -f tank raidz sda sdb sdc sdd sde sdf
>>
>> When running IOR on each node, I get about write perf = 460 MB/s and
>> read perf = 430 MB/s (writing 1024 GiB with xfer_size=16MiB).
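>>
>> (The exact IOR invocation isn't shown here; assuming a single-task,
>> single-file run with those sizes, it would have been something along the
>> lines of:
>>
>> # ior -w -r -t 16m -b 1024g -o /tank/ior_testfile
>>
>> where -t is the transfer size, -b the amount of data written per task and
>> -o the test file.)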
>>
>>
>> _GlusterFS raid5 through TCP (IPoIB) between the 3 nodes:_
>>
>> # gluster volume create raid55 disperse 3 redundancy 1
>> sm01.opa:/tank/ec_point sm02.opa:/tank/ec_point sm03.opa:/tank/ec_point
>>
>>
>> There is a big difference between the read performance on each ZFS node
>> and the read performance through Gluster.
>>
>> Thanks in advance :)
>>
>> Yann
>>
>