[Gluster-users] Poor gluster performance on large files.

Brandon Bates brandon at brandonbates.com
Fri Oct 27 16:11:09 UTC 2017


Unfortunately I'm not in a position to try that now.  The first server
is (and has been) in production as the main file server; the second
would have been a candidate for testing, but I've had to start staging
data there, so I can't now.

-----Original Message-----
From: Bartosz Zieba [mailto:kontakt at avatat.pl] 
Sent: Friday, October 27, 2017 1:29 AM
To: Brandon Bates
Cc: gluster-users at gluster.org
Subject: Re: [Gluster-users] Poor gluster performance on large files.


Why don't you set the LSI to passthrough mode and create one brick per HDD?
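
(For reference, a one-brick-per-disk layout would be created roughly like
this; the device names, mount points, and volume name below are
hypothetical:)

    # One XFS filesystem per disk, each mounted as its own brick
    mkfs.xfs -i size=512 /dev/sdb
    mkdir -p /bricks/disk1
    mount /dev/sdb /bricks/disk1
    mkdir /bricks/disk1/brick

    # Distributed volume spanning one brick per disk
    gluster volume create tank server1:/bricks/disk1/brick \
                               server1:/bricks/disk2/brick
    gluster volume start tank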


Regards, 
Bartosz

On 27.10.2017 at 08:47, Brandon Bates <brandon at brandonbates.com> wrote:



Hi gluster users,
I've spent several months trying to get any kind of high performance out of
Gluster.  The current XFS/Samba array is used for video editing, and
300-400MB/s for at least 4 simultaneous clients is the minimum (currently a
single Windows client gets at least 700/700 over Samba, peaking to 950 at
times in the Blackmagic speed test).  Gluster has been getting me as low as
200MB/s when the server itself can do well over 1000MB/s.  I have really
been counting on / touting Gluster as the way of the future for us, but I
can't justify cutting our performance to a mere 13% of non-Gluster speeds.
I've nearly reached the point of giving up and really need some help/hope;
otherwise I'll just have to migrate the data from server 1 to server 2,
just like I've been doing for the last decade. :(
 
If anyone can please help me understand where I might be going wrong it
would be absolutely wonderful!
 
Server 1:
Single E5-1620 v2
Ubuntu 14.04
glusterfs 3.10.5
16GB Ram
24 drive array on LSI raid
Sustained >1.5GB/s to XFS (77TB)
 
Server 2:
Single E5-2620 v3
Ubuntu 16.04
glusterfs 3.10.5
32GB Ram
36 drive array on LSI raid
Sustained >2.5GB/s to XFS (164TB)
 
Speed tests are run locally with a single thread (dd) or 4 threads
(iozone), using my standard 64K I/O size against 20G files (local drives)
or 5G files (Gluster).
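
For reference, the runs look roughly like the following; the mount points
and file names are examples, not the exact commands used:

    # Single-threaded sequential write and read with dd (64K blocks, 20G file)
    dd if=/dev/zero of=/mnt/test/20G.bin bs=64k count=327680 conv=fdatasync
    dd if=/mnt/test/20G.bin of=/dev/null bs=64k

    # 4-thread sequential write (-i 0) and read (-i 1) with iozone
    # (64K records, one 5G file per thread)
    iozone -i 0 -i 1 -t 4 -s 5g -r 64k \
        -F /mnt/gluster/f1 /mnt/gluster/f2 /mnt/gluster/f3 /mnt/gluster/f4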
 
Servers have Intel X520-DA2 dual-port 10Gbit NICs bonded together with an
802.3ad LAG to a Quanta LB6-M switch.  iperf throughput is >9000Mbit/s for
a single stream.
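
The bond is a standard 802.3ad setup; on Ubuntu it looks roughly like this
(interface names and addresses below are examples), and the throughput
check is a plain single-stream iperf run:

    auto bond0
    iface bond0 inet static
        address 10.0.0.1
        netmask 255.255.255.0
        bond-mode 802.3ad
        bond-miimon 100
        bond-xmit-hash-policy layer3+4
        bond-slaves enp1s0f0 enp1s0f1

    # Throughput check between the servers, single TCP stream for 30s
    iperf -c 10.0.0.2 -t 30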
 
Here is my current gluster performance:
 
Single brick on server 1 (server 2 was similar):
Fuse mount:
1000MB/s write
325MB/s read
 
Distribute-only volume across servers 1+2:
Fuse mount on server 1:
900MB/s write, iozone 4 streams
320MB/s read, iozone 4 streams
Single-stream read: 91MB/s @64K, 141MB/s @1M
Simultaneous 4-stream iozone, 5G files, on both servers:
Server 1: 1200MB/s write, 200MB/s read
Server 2: 950MB/s write, 310MB/s read
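
(The FUSE mounts above are just the standard glusterfs mount; the volume
name and mount point here are examples:)

    mount -t glusterfs server1:/tank /mnt/gluster
    # or the equivalent /etc/fstab entry:
    # server1:/tank  /mnt/gluster  glusterfs  defaults,_netdev  0 0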
 
I did some earlier single-brick tests with the Samba VFS and 3
workstations and got up to 750MB/s write and 800MB/s read aggregate, but
that's still not good.
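
Those tests went through Samba's vfs_glusterfs module; a minimal share
definition looks roughly like this (the share and volume names are
examples):

    [video]
        path = /
        read only = no
        kernel share modes = no
        vfs objects = glusterfs
        glusterfs:volume = tank
        glusterfs:logfile = /var/log/samba/glusterfs-video.%M.log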
 
These are the only volume-setting tweaks I have made (after much
single-box testing to find what actually made a difference):
performance.cache-size             1GB  (default 23MB)
performance.client-io-threads      on
performance.io-thread-count        64
performance.read-ahead-page-count  16
performance.stat-prefetch          on
server.event-threads               8  (default?)
client.event-threads               8
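
All of these were applied with the standard gluster CLI, e.g. (the volume
name is an example):

    gluster volume set tank performance.cache-size 1GB
    gluster volume set tank performance.client-io-threads on
    gluster volume set tank performance.io-thread-count 64
    gluster volume set tank performance.read-ahead-page-count 16
    gluster volume set tank performance.stat-prefetch on
    gluster volume set tank server.event-threads 8
    gluster volume set tank client.event-threads 8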
 
Any help given is appreciated!

_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users
