[Gluster-users] Poor gluster performance on large files.

Brandon Bates brandon at brandonbates.com
Mon Oct 30 15:58:33 UTC 2017


Results from retesting with the thread settings varied:

Client-io-threads ON, server.event-threads 8, client.event-threads 8
900MB/s Write, 320MB/s Read

Client-io-threads OFF, server.event-threads 8, client.event-threads 8
873MB/s Write, 115MB/s Read

Client-io-threads OFF, server.event-threads 1, client.event-threads 2
876MB/s Write, 267MB/s Read

Client-io-threads ON, server.event-threads 1, client.event-threads 2
943MB/s Write, 275MB/s Read
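
(For anyone reproducing the matrix above: the settings were switched between runs with plain "gluster volume set" calls along these lines; the volume name "gvol" is a placeholder, not the real one.)

    # toggle the client-side io-threads translator ("gvol" is a placeholder)
    gluster volume set gvol performance.client-io-threads on    # or: off
    # event threads on the bricks and on the clients
    gluster volume set gvol server.event-threads 8    # 1 is the default
    gluster volume set gvol client.event-threads 8    # 2 is the default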

> On Oct 30, 2017, at 3:44 AM, Karan Sandha <ksandha at redhat.com> wrote:
> 
> Hi Brandon,
> 
> Can you please turn OFF client-io-threads? We have seen performance degrade with io-threads ON for both sequential and random reads/writes. For reference, server.event-threads defaults to 1 and client.event-threads to 2.
> 
> Thanks & Regards
> 
>> On Fri, Oct 27, 2017 at 12:17 PM, Brandon Bates <brandon at brandonbates.com> wrote:
>> Hi gluster users,
>> I've spent several months trying to get any kind of high performance out of gluster.  The current XFS/samba array is used for video editing, and 300-400MB/s each for at least 4 simultaneous clients is the minimum (currently a single Windows client gets at least 700MB/s in each direction over samba, peaking at times to 950MB/s in the Blackmagic speed test).  Gluster has been getting me as low as 200MB/s when the server itself can do well over 1000MB/s.  I have really been counting on, and touting, Gluster as the way of the future for us.  However, I can't justify cutting our performance to a mere 13% of non-gluster speeds.  I've started to reach the give-up point and really need some help/hope; otherwise I'll just have to migrate the data from server 1 to server 2 the way I've been doing for the last decade. :(
>>  
>> If anyone can please help me understand where I might be going wrong it would be absolutely wonderful!
>>  
>> Server 1:
>> Single E5-1620 v2
>> Ubuntu 14.04
>> glusterfs 3.10.5
>> 16GB Ram
>> 24 drive array on LSI raid
>> Sustained >1.5GB/s to XFS (77TB)
>>  
>> Server 2:
>> Single E5-2620 v3
>> Ubuntu 16.04
>> glusterfs 3.10.5
>> 32GB Ram
>> 36 drive array on LSI raid
>> Sustained >2.5GB/s to XFS (164TB)
>>  
>> Speed tests are done locally with a single thread (dd) or 4 threads (iozone), using my standard 64K I/O size against 20G or 5G files (20G for local drives, 5G for gluster).
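>> Commands along these lines reproduce the method described (the mount point and file names are placeholders, and these are standard dd/iozone flags rather than the exact runs):
>> 
>>   # single stream, 64K blocks, 5G file on the fuse mount (placeholder path)
>>   dd if=/dev/zero of=/mnt/gluster/testfile bs=64k count=81920 conv=fdatasync
>>   # drop caches before the read pass so it is not served from RAM
>>   echo 3 > /proc/sys/vm/drop_caches
>>   dd if=/mnt/gluster/testfile of=/dev/null bs=64k
>>   # 4 threads, sequential write (-i 0) and read (-i 1), 64K records, 5G per file
>>   iozone -i 0 -i 1 -r 64k -s 5g -t 4 -F /mnt/gluster/f1 /mnt/gluster/f2 /mnt/gluster/f3 /mnt/gluster/f4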
>>  
>> Servers have Intel X520-DA2 dual-port 10Gbit NICs bonded together with an 802.3ad LAG to a Quanta LB6-M switch.  iperf throughput is >9000Mbit/s for a single stream.
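>> (For reference, the checks behind those numbers look like this; "bond0" and the peer IP are placeholders:)
>> 
>>   iperf -s                       # on one server
>>   iperf -c <peer-ip>             # on the other; expect >9000 Mbit/s single stream
>>   cat /proc/net/bonding/bond0    # should report 802.3ad as the bonding mode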
>>  
>> Here is my current gluster performance:
>>  
>> Single brick on server 1 (server 2 was similar):
>> Fuse mount:
>> 1000MB/s write
>> 325MB/s read
>>  
>> Distributed only servers 1+2:
>> Fuse mount on server 1:
>> 900MB/s write iozone 4 streams
>> 320MB/s read iozone 4 streams
>> single stream read 91MB/s @64K, 141MB/s @1M
>> Simultaneous iozone from both servers (4 streams, 5G files):
>> Server 1: 1200MB/s write, 200MB/s read
>> Server 2: 950MB/s write, 310MB/s read
>>  
>> I did some earlier single-brick tests with the samba VFS and 3 workstations and got up to 750MB/s write and 800MB/s read aggregate, but that's still not good.
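>> (Those tests used Samba's vfs_glusterfs module; a minimal share definition looks roughly like this, with the share name, volume name, and log path as placeholders:)
>> 
>>   [video]
>>       # "gvol" is a placeholder volume name; adjust the log path to taste
>>       path = /
>>       vfs objects = glusterfs
>>       glusterfs:volume = gvol
>>       glusterfs:logfile = /var/log/samba/glusterfs-video.log
>>       kernel share modes = no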
>>  
>> These are the only volume settings I've changed (after much single-box testing to find what actually made a difference); the commands used to apply them are sketched after the list:
>> performance.cache-size 1GB   (Default 23MB)
>> performance.client-io-threads on
>> performance.io-thread-count 64
>> performance.read-ahead-page-count       16
>> performance.stat-prefetch on
>> server.event-threads 8 (default 1)
>> client.event-threads 8 (default 2)
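>> For completeness, those were applied with commands of this form ("gvol" is a placeholder volume name):
>> 
>>   gluster volume set gvol performance.cache-size 1GB
>>   gluster volume set gvol performance.io-thread-count 64
>>   gluster volume set gvol performance.read-ahead-page-count 16
>>   gluster volume set gvol performance.stat-prefetch on
>>   # all current values (and defaults) can be listed with:
>>   gluster volume get gvol all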
>>  
>> Any help given is appreciated!
>> 
> 
> 
> 
> -- 
> KARAN SANDHA
> QUALITY ENGINEER
> Red Hat Bangalore
> ksandha at redhat.com    M: 9888009555     IM: Karan on @irc