<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML><HEAD>
<META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=iso-8859-2">
<TITLE>Message</TITLE>
<META name=GENERATOR content="MSHTML 11.00.10570.1001"></HEAD>
<BODY>
<DIV><FONT color=#0000ff size=2 face=Arial><SPAN
class=460280716-27102017>Unfortunately I'm not in a position to try that
now. The first server is (and has been) in production as the main file
server, the second would have been a candidate for trying but I've had to start
staging data there so I can't now.</SPAN></FONT></DIV>
<BLOCKQUOTE style="MARGIN-RIGHT: 0px" dir=ltr>
<DIV></DIV>
<DIV lang=en-us class=OutlookMessageHeader dir=ltr align=left><FONT size=2
face=Tahoma>-----Original Message-----<BR><B>From:</B> Bartosz Zieba
[mailto:kontakt@avatat.pl] <BR><B>Sent:</B> Friday, October 27, 2017 1:29
AM<BR><B>To:</B> Brandon Bates<BR><B>Cc:</B>
gluster-users@gluster.org<BR><B>Subject:</B> Re: [Gluster-users] Poor gluster
performance on large files.<BR><BR></FONT></DIV>Why don’t you set the LSI to
passthrough mode and configure one brick per HDD?<BR><BR>
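As a sketch of that suggestion, a distributed volume with one brick per disk might be created along these lines (hostnames, mount points, and the volume name here are made up for illustration, not taken from this setup):

```shell
# Hypothetical: one brick per HDD instead of one brick on top of the RAID.
# Each disk is formatted XFS and mounted separately, then listed as its own brick.
gluster volume create bigvol \
  server1:/bricks/disk1/brick server1:/bricks/disk2/brick \
  server2:/bricks/disk1/brick server2:/bricks/disk2/brick
gluster volume start bigvol
```

This trades the RAID controller's aggregation for gluster's own file-level distribution across disks.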
<DIV>Regards,
<DIV>Bartosz</DIV></DIV>
<DIV><BR>Message written by Brandon Bates <<A
href="mailto:brandon@brandonbates.com">brandon@brandonbates.com</A>>
on 27.10.2017 at 08:47:<BR><BR></DIV>
<BLOCKQUOTE type="cite">
<DIV>
<DIV>
<DIV><FONT face=Arial><FONT size=2><SPAN class=468513406-27102017>Hi gluster
users,</SPAN></FONT></FONT></DIV>
<DIV><FONT face=Arial><FONT size=2><SPAN class=468513406-27102017>I've spent
several months trying to get any kind of high performance out of gluster.
The current XFS/samba array is used for video editing, and 300-400MB/s to
at least 4 clients is the minimum requirement (currently a single Windows
client gets at least 700/700 over samba, peaking to 950 at times using the
Blackmagic speed test). Gluster has been getting me as low as 200MB/s when
the server can do well over 1000MB/s. I have really been counting
on / touting Gluster as being the way of the future for us. However, I
can't justify cutting our performance to a mere 13% of non-gluster speeds.
I've nearly reached the point of giving up and really need some help/hope;
otherwise I'll just have to migrate the data from server 1 to server 2,
just like I've been doing for the last
decade. :(</SPAN></FONT></FONT></DIV>
<DIV><FONT face=Arial><FONT size=2><SPAN
class=468513406-27102017></SPAN></FONT></FONT> </DIV>
<DIV><FONT face=Arial><FONT size=2><SPAN class=468513406-27102017><SPAN
class=468513406-27102017>If anyone can please help me understand where I
might be going wrong it would be absolutely
wonderful!</SPAN></SPAN></FONT></FONT></DIV>
<DIV><FONT face=Arial><FONT size=2><SPAN
class=468513406-27102017></SPAN></FONT></FONT> </DIV></DIV>
<DIV><FONT size=2 face=Arial>Server 1:<BR>Single E5-1620 v2<BR>Ubuntu
14.04<BR>glusterfs 3.10.5<BR>16GB Ram<BR>24 drive array on LSI
raid<BR>Sustained >1.5GB/s to XFS (77TB)</FONT></DIV>
<DIV> </DIV>
<DIV><FONT size=2 face=Arial>Server 2:<BR>Single E5-2620 v3<BR>Ubuntu
16.04<BR>glusterfs 3.10.5<BR>32GB Ram<BR>36 drive array on LSI
raid<BR>Sustained >2.5GB/s to XFS (164TB)</FONT></DIV>
<DIV> </DIV>
<DIV><FONT size=2 face=Arial>Speed tests are done locally with a single
thread (dd) or 4 threads (iozone), using my standard 64k IO size to 20G
files for local drives or 5G files for gluster.</FONT></DIV>
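A scaled-down sketch of the single-threaded dd write test described above (64k block size as stated; the file size is shrunk from 20G to 10M so it runs anywhere, and the path is illustrative, not the actual mount point):

```shell
# Single-stream sequential write test, 64k blocks, synced to disk at the end.
TESTFILE=$(mktemp)
dd if=/dev/zero of="$TESTFILE" bs=64k count=160 conv=fdatasync 2>/dev/null
BYTES=$(stat -c %s "$TESTFILE")   # GNU stat; 160 * 65536 = 10485760 bytes
echo "wrote $BYTES bytes"
rm -f "$TESTFILE"
```

At full size, `dd` reports MB/s directly on stderr; `conv=fdatasync` makes sure the number reflects data actually flushed to the brick rather than the page cache.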
<DIV><FONT size=2 face=Arial></FONT> </DIV>
<DIV><FONT size=2 face=Arial>Servers have Intel X520-DA2 dual port 10Gbit
NICs bonded together with an 802.3ad LAG to a Quanta LB6-M switch. Iperf
throughput numbers are single stream >9000Mbit/s</FONT></DIV>
<DIV> </DIV>
<DIV><FONT size=2 face=Arial>Here is my current gluster
performance:</FONT></DIV>
<DIV> </DIV>
<DIV><FONT size=2 face=Arial>Single brick on server 1 (server 2 was
similar):<BR>Fuse mount:<BR>1000MB/s write<BR>325MB/s read</FONT></DIV>
<DIV> </DIV>
<DIV><FONT size=2 face=Arial>Distributed only servers 1+2:<BR>Fuse mount on
server 1:<BR>900MB/s write iozone 4 streams<BR>320MB/s read iozone 4
streams<BR>single stream read 91MB/s @64K, 141MB/s @1M<BR>simultaneous
iozone 4 stream 5G files<BR>Server 1: 1200MB/s write, 200MB/s read<BR>Server
2: 950MB/s write, 310MB/s read</FONT></DIV>
<DIV><FONT size=2 face=Arial></FONT> </DIV>
<DIV><FONT size=2 face=Arial>I did some earlier single brick tests with the
Samba VFS and 3 workstations and got up to 750MB/s write and 800MB/s read
aggregate, but that's still not good.</FONT></DIV>
<DIV> </DIV>
<DIV><FONT size=2 face=Arial>These are the only volume settings tweaks I
have made <SPAN class=468513406-27102017>(</SPAN>after much single box
testing<SPAN class=468513406-27102017> </SPAN>to find what actually made a
difference<SPAN class=468513406-27102017>)</SPAN>:<BR>performance.cache-size
1GB (Default 23MB)<BR>performance.client-io-threads
on<BR>performance.io-thread-count
64<BR>performance.read-ahead-page-count
16<BR>performance.stat-prefetch on<BR>server.event-threads 8
(default?)<BR>client.event-threads 8</FONT></DIV>
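For reference, tweaks like those listed above are applied per volume with the gluster CLI along these lines (the volume name "gvol0" is made up; the values mirror the list):

```shell
# Illustrative only: apply the listed tuning options to a volume named gvol0.
gluster volume set gvol0 performance.cache-size 1GB
gluster volume set gvol0 performance.client-io-threads on
gluster volume set gvol0 performance.io-thread-count 64
gluster volume set gvol0 performance.read-ahead-page-count 16
gluster volume set gvol0 performance.stat-prefetch on
gluster volume set gvol0 server.event-threads 8
gluster volume set gvol0 client.event-threads 8
```

Current values can be checked afterwards with `gluster volume get gvol0 all`.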
<DIV> </DIV>
<DIV><FONT size=2 face=Arial><SPAN class=468513406-27102017>Any help given
is appreciated!</SPAN></FONT></DIV></DIV></BLOCKQUOTE>
<BLOCKQUOTE type="cite">
<DIV><SPAN>_______________________________________________</SPAN><BR><SPAN>Gluster-users
mailing list</SPAN><BR><SPAN><A
href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</A></SPAN><BR><SPAN><A
href="http://lists.gluster.org/mailman/listinfo/gluster-users">http://lists.gluster.org/mailman/listinfo/gluster-users</A></SPAN></DIV></BLOCKQUOTE></BLOCKQUOTE></BODY></HTML>