I see you're using ZFS; what does the pool look like? Did you set compression, relatime, xattr, and acltype? What versions of ZFS and Gluster? What kind of CPUs and memory do the servers have, and any ZFS tuning?

How are you mounting the storage volumes? Are you using jumbo frames? Are the VMs also on these servers, or on different hosts? If on separate hosts, how are they connected?

Lots of variables to look at; can you give us more info on your whole setup?
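
Something like the following would cover most of it. This is only a rough sketch; the pool name "tank" is guessed from the brick paths further down in the thread:

  zpool status tank                  # pool name guessed from the brick paths
  zfs get compression,atime,relatime,xattr,acltype,recordsize tank
  modinfo zfs | grep -i version      # ZFS module version
  gluster --version
  ip link show                       # the MTU shows whether jumbo frames are on
  lscpu; free -h                     # CPU / memory
  mount | grep -iE 'gluster|nfs'     # how the volumes are mounted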
<span style="margin: -1.3px 0.0px 0.0px 0.0px" class=""><font face="Helvetica" size="4" color="#000000" style="font: 13.0px Helvetica; color: #000000" class=""><b class="">Subject:</b> Re: [Gluster-users] Finding performance bottlenecks</font></span><br class="">
<span style="margin: -1.3px 0.0px 0.0px 0.0px" class=""><font face="Helvetica" size="4" color="#000000" style="font: 13.0px Helvetica; color: #000000" class=""><b class="">Date:</b> April 30, 2018 at 8:27:03 PM CDT</font></span><br class="">
<span style="margin: -1.3px 0.0px 0.0px 0.0px" class=""><font face="Helvetica" size="4" color="#000000" style="font: 13.0px Helvetica; color: #000000" class=""><b class="">To:</b> Tony Hoyle</font></span><br class="">
<span style="margin: -1.3px 0.0px 0.0px 0.0px" class=""><font face="Helvetica" size="4" color="#000000" style="font: 13.0px Helvetica; color: #000000" class=""><b class="">Cc:</b> Gluster Users</font></span><br class="">
<br class="Apple-interchange-newline"><div class=""><div dir="ltr" class="">Hi,<div class=""><br class=""></div><div class="">So is the KVM or Vmware as the host(s)? I basically have the same setup ie 3 x 1TB "raid1" nodes and VMs, but 1gb networking. I do notice with vmware using NFS disk was pretty slow (40% of a single disk) but this was over 1gb networking which was clearly saturating. Hence I am moving to KVM to use glusterfs hoping for better performance and bonding, it will be interesting to see which host type runs faster.</div><div class=""><br class=""></div><div class="">Which operating system is gluster on? </div><div class=""><br class=""></div><div class="">Did you do iperf between all nodes?</div><div class=""><br class=""></div><div class=""><br class=""></div><div class=""><br class=""></div><div class=""><br class=""></div></div><div class="gmail_extra"><br class=""><div class="gmail_quote">On 1 May 2018 at 00:14, Tony Hoyle <span dir="ltr" class=""><<a href="mailto:tony@hoyle.me.uk" target="_blank" class="">tony@hoyle.me.uk</a>></span> wrote:<br class=""><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi<br class="">
<br class="">
I'm trying to set up a 3-node Gluster cluster and am hitting huge performance bottlenecks.

The 3 servers are connected over 10Gb networking and glusterfs is set to create a 3-node replica.

With a single VM performance was poor, but I could have lived with it.

I tried to stress it by putting copies of a bunch of VMs on the servers and seeing what happened when they ran in parallel. Network load never broke 13Mbps and disk load peaked at under 1Mbps. The VMs were so slow that services timed out during boot, causing failures.

I checked the network with iperf and it reached 9.7Gb/s, so the hardware is fine; it just seems that for some reason glusterfs just isn't using it.

gluster volume top gv0 read-perf shows 0Mbps for all files, although I'm not sure whether the command is working.
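
For anyone wanting to dig further, per-fop numbers like the ones below come out of the profiling interface; a rough sketch, with gv0 as above and the block size and count on the last line only illustrative:

  gluster volume profile gv0 start
  # run the VM workload for a while, then:
  gluster volume profile gv0 info
  # runs its own read test instead of only reporting past throughput:
  gluster volume top gv0 read-perf bs 4096 count 1024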
<br class="">
There's probably a magic setting somewhere, but I've been a couple of<br class="">
days trying to find it now..<br class="">
<br class="">
Tony<br class="">
<br class="">
stats:

   Block Size:      512b+     1024b+     2048b+
 No. of Reads:          0          2          0
No. of Writes:         40        141        399

   Block Size:     4096b+     8192b+    16384b+
 No. of Reads:        173         24          4
No. of Writes:      18351       5049       2478

   Block Size:    32768b+    65536b+   131072b+
 No. of Reads:         12        113          0
No. of Writes:       1640        648        200

   Block Size:   262144b+   524288b+  1048576b+
 No. of Reads:          0          0          0
No. of Writes:        329         55        139

   Block Size:  2097152b+
 No. of Reads:          0
No. of Writes:          1
%-latency   Avg-latency    Min-Latency     Max-Latency    No. of calls          Fop
---------   -----------    -----------     -----------    ------------         ----
     0.00        0.00 us        0.00 us         0.00 us              41      RELEASE
     0.00        0.00 us        0.00 us         0.00 us               6   RELEASEDIR
     0.00        3.43 us        2.65 us         4.10 us               6      OPENDIR
     0.00      217.85 us      217.85 us       217.85 us               1      SETATTR
     0.00       66.38 us       49.47 us        80.57 us               4         SEEK
     0.00      394.18 us      394.18 us       394.18 us               1    FTRUNCATE
     0.00      116.68 us       29.88 us       186.25 us              16     GETXATTR
     0.00      397.32 us      267.18 us       540.38 us              10      XATTROP
     0.00      553.09 us      244.97 us      1242.98 us              12      READDIR
     0.00      201.60 us       69.61 us       744.71 us              41         OPEN
     0.00      734.96 us       75.05 us     37399.38 us             328         READ
     0.01     1750.65 us       33.99 us    750562.48 us             591       LOOKUP
     0.02     2972.84 us       30.72 us    788018.47 us             496       STATFS
     0.03    10951.33 us       35.36 us    695155.13 us             166         STAT
     0.42     2574.98 us      208.73 us   1710282.73 us           11877     FXATTROP
     2.80      609.20 us      468.51 us    321422.91 us          333946    RCHECKSUM
     5.04      548.76 us       14.83 us  76288179.46 us          668188      INODELK
    18.46   149940.70 us       13.59 us  79966278.04 us            8949     FINODELK
    20.04   395073.91 us       84.99 us   3835355.67 us            3688        FSYNC
    53.17   131171.66 us       85.76 us   3838020.34 us           29470        WRITE
     0.00        0.00 us        0.00 us         0.00 us            7238       UPCALL
     0.00        0.00 us        0.00 us         0.00 us            7238      CI_IATT
<br class="">
Duration: 1655 seconds<br class="">
Data Read: 8804864 bytes<br class="">
Data Written: 612756480 bytes<br class="">
<br class="">
config:
Volume Name: gv0
Type: Replicate
Volume ID: a0b6635a-ae48-491b-834a-08e849e87642
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: barbelith10:/tank/vmdata/gv0
Brick2: rommel10:/tank/vmdata/gv0
Brick3: panzer10:/tank/vmdata/gv0
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
features.cache-invalidation: on
nfs.disable: on
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
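
For comparison, most of the options above (remote-dio, eager-lock, and the disabled performance translators) are the same ones the packaged "virt" option group sets for VM-image workloads, so applying the whole group is a quick way to rule out a missing setting. A rough sketch, not verified against this particular setup:

  # applies the option group shipped in /var/lib/glusterd/groups/virt
  # (its exact contents depend on the installed Gluster version)
  gluster volume set gv0 group virt
  # then review what actually changed
  gluster volume get gv0 all | grep -E 'shard|io-thread|event-thread|remote-dio|eager-lock'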