<div dir="ltr">Hi,<div><br></div><div>So is the KVM or Vmware as the host(s)? I basically have the same setup ie 3 x 1TB "raid1" nodes and VMs, but 1gb networking. I do notice with vmware using NFS disk was pretty slow (40% of a single disk) but this was over 1gb networking which was clearly saturating. Hence I am moving to KVM to use glusterfs hoping for better performance and bonding, it will be interesting to see which host type runs faster.</div><div><br></div><div>Which operating system is gluster on? </div><div><br></div><div>Did you do iperf between all nodes?</div><div><br></div><div><br></div><div><br></div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On 1 May 2018 at 00:14, Tony Hoyle <span dir="ltr"><<a href="mailto:tony@hoyle.me.uk" target="_blank">tony@hoyle.me.uk</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi<br>
<br>
I'm trying to set up a 3-node gluster and am hitting huge performance<br>
bottlenecks.<br>
<br>
The 3 servers are connected over 10Gb Ethernet and glusterfs is set up as<br>
a 3-node replica.<br>
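<br>
(For reference, a replica-3 volume like this would typically be created along the lines below, using the brick paths from the config further down; this is only a sketch, not necessarily the exact commands used here:)<br>
<br>
  gluster volume create gv0 replica 3 barbelith10:/tank/vmdata/gv0 rommel10:/tank/vmdata/gv0 panzer10:/tank/vmdata/gv0<br>
  gluster volume start gv0<br>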
<br>
With a single VM, performance was poor, but I could have lived with it.<br>
<br>
I tried to stress it by putting copies of a bunch of VMs on the servers<br>
and seeing what happened with them running in parallel: network load never<br>
broke 13Mbps and disk load peaked at under 1Mbps. The VMs were so slow that<br>
services timed out during boot, causing failures.<br>
<br>
I checked the network with iperf and it reached 9.7Gb/s, so the hardware is<br>
fine; it just seems that for some reason glusterfs isn't using it.<br>
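<br>
(For anyone reproducing the check: a test of this kind between each pair of nodes gives the raw TCP throughput; the hostname here is just one of the brick hosts from the config, and the flags are only a sketch:)<br>
<br>
  # on one node<br>
  iperf -s<br>
  # on another node, against the first<br>
  iperf -c barbelith10 -t 30 -P 4<br>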
<br>
gluster volume top gv0 read-perf shows 0Mbps for all files, although I'm<br>
not sure whether the command is working.<br>
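<br>
(For reference, read-perf can also be given an explicit block size and count so that it actively measures brick throughput rather than just reporting past reads, roughly:)<br>
<br>
  gluster volume top gv0 read-perf bs 4096 count 1024 list-cnt 10<br>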
<br>
There's probably a magic setting somewhere, but I've spent a couple of<br>
days trying to find it now...<br>
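<br>
(The usual starting point suggested for VM-image workloads is the "virt" option group, or the individual options it contains; the lines below are an example, and the thread values are only illustrative, with no guarantee that any of them is the missing setting here:)<br>
<br>
  gluster volume set gv0 group virt<br>
  # or individually, for example:<br>
  gluster volume set gv0 performance.write-behind on<br>
  gluster volume set gv0 server.event-threads 4<br>
  gluster volume set gv0 client.event-threads 4<br>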
<br>
Tony<br>
<br>
stats:<br>
     Block Size:       512b+      1024b+      2048b+<br>
   No. of Reads:           0           2           0<br>
  No. of Writes:          40         141         399<br>
<br>
     Block Size:      4096b+      8192b+     16384b+<br>
   No. of Reads:         173          24           4<br>
  No. of Writes:       18351        5049        2478<br>
<br>
     Block Size:     32768b+     65536b+    131072b+<br>
   No. of Reads:          12         113           0<br>
  No. of Writes:        1640         648         200<br>
<br>
     Block Size:    262144b+    524288b+   1048576b+<br>
   No. of Reads:           0           0           0<br>
  No. of Writes:         329          55         139<br>
<br>
     Block Size:   2097152b+<br>
   No. of Reads:           0<br>
  No. of Writes:           1<br>
<br>
%-latency   Avg-latency   Min-Latency     Max-Latency   No. of calls   Fop<br>
---------   -----------   -----------     -----------   ------------   ----<br>
     0.00       0.00 us       0.00 us         0.00 us             41   RELEASE<br>
     0.00       0.00 us       0.00 us         0.00 us              6   RELEASEDIR<br>
     0.00       3.43 us       2.65 us         4.10 us              6   OPENDIR<br>
     0.00     217.85 us     217.85 us       217.85 us              1   SETATTR<br>
     0.00      66.38 us      49.47 us        80.57 us              4   SEEK<br>
     0.00     394.18 us     394.18 us       394.18 us              1   FTRUNCATE<br>
     0.00     116.68 us      29.88 us       186.25 us             16   GETXATTR<br>
     0.00     397.32 us     267.18 us       540.38 us             10   XATTROP<br>
     0.00     553.09 us     244.97 us      1242.98 us             12   READDIR<br>
     0.00     201.60 us      69.61 us       744.71 us             41   OPEN<br>
     0.00     734.96 us      75.05 us     37399.38 us            328   READ<br>
     0.01    1750.65 us      33.99 us    750562.48 us            591   LOOKUP<br>
     0.02    2972.84 us      30.72 us    788018.47 us            496   STATFS<br>
     0.03   10951.33 us      35.36 us    695155.13 us            166   STAT<br>
     0.42    2574.98 us     208.73 us   1710282.73 us          11877   FXATTROP<br>
     2.80     609.20 us     468.51 us    321422.91 us         333946   RCHECKSUM<br>
     5.04     548.76 us      14.83 us  76288179.46 us         668188   INODELK<br>
    18.46  149940.70 us      13.59 us  79966278.04 us           8949   FINODELK<br>
    20.04  395073.91 us      84.99 us   3835355.67 us           3688   FSYNC<br>
    53.17  131171.66 us      85.76 us   3838020.34 us          29470   WRITE<br>
     0.00       0.00 us       0.00 us         0.00 us           7238   UPCALL<br>
     0.00       0.00 us       0.00 us         0.00 us           7238   CI_IATT<br>
<br>
Duration: 1655 seconds<br>
Data Read: 8804864 bytes<br>
Data Written: 612756480 bytes<br>
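<br>
(The stats above look like cumulative volume profile output, i.e. gathered with something along the lines of:)<br>
<br>
  gluster volume profile gv0 start<br>
  gluster volume profile gv0 info<br>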
<br>
config:<br>
Volume Name: gv0<br>
Type: Replicate<br>
Volume ID: a0b6635a-ae48-491b-834a-08e849e87642<br>
Status: Started<br>
Snapshot Count: 0<br>
Number of Bricks: 1 x 3 = 3<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1: barbelith10:/tank/vmdata/gv0<br>
Brick2: rommel10:/tank/vmdata/gv0<br>
Brick3: panzer10:/tank/vmdata/gv0<br>
Options Reconfigured:<br>
diagnostics.count-fop-hits: on<br>
diagnostics.latency-measurement: on<br>
features.cache-invalidation: on<br>
nfs.disable: on<br>
cluster.server-quorum-type: server<br>
cluster.quorum-type: auto<br>
network.remote-dio: enable<br>
cluster.eager-lock: enable<br>
performance.stat-prefetch: off<br>
performance.io-cache: off<br>
performance.read-ahead: off<br>
performance.quick-read: off<br>
_______________________________________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
<a href="http://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://lists.gluster.org/<wbr>mailman/listinfo/gluster-users</a><br>
</blockquote></div><br></div>