[Gluster-users] Finding performance bottlenecks

Darrell Budic budic at onholyground.com
Tue May 1 14:47:14 UTC 2018


I see you’re using ZFS, what does the pool look like? Did you set compression, relatime, xattr, & acltype? What versions of ZFS & Gluster? What kind of CPUs/memory are in the servers, and have you done any ZFS tuning?
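For reference, the usual checks/settings look something like this (dataset name guessed from your brick paths, and lz4/xattr=sa/posixacl/relatime are just the common recommendations for VM hosting, not gospel):

    # show current values for the dataset backing the bricks
    zfs get compression,relatime,xattr,acltype tank/vmdata

    # typical settings for gluster-on-zfs VM storage
    zfs set compression=lz4 tank/vmdata
    zfs set xattr=sa tank/vmdata
    zfs set acltype=posixacl tank/vmdata
    zfs set relatime=on tank/vmdata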

How are you mounting the storage volumes? Are you using jumbo frames? Are the VMs also on these servers, or on different hosts? If separate hosts, how are they connected?
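Quick ways to check the frame size, plus a typical fuse mount for comparison (interface name and mountpoint here are placeholders):

    ip link show eth0               # look for "mtu 9000" if you expect jumbo frames
    ping -M do -s 8972 rommel10     # 9000 MTU minus 28 bytes of headers; must not fragment

    mount -t glusterfs barbelith10:/gv0 /mnt/vmstore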

Lots of variables to look at; can you give us more info on your whole setup?
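In particular, the built-in profiler is handy for narrowing this down; it gathers per-brick, per-FOP latency (like the dump below) over a real workload window:

    gluster volume profile gv0 start
    # ...run the VMs under load for a few minutes...
    gluster volume profile gv0 info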

> From: Thing <thing.thing at gmail.com>
> Subject: Re: [Gluster-users] Finding performance bottlenecks
> Date: April 30, 2018 at 8:27:03 PM CDT
> To: Tony Hoyle
> Cc: Gluster Users
> 
> Hi,
> 
> So is KVM or VMware the host?  I basically have the same setup, i.e. 3 x 1TB "raid1" nodes and VMs, but with 1Gb networking.  I did notice that with VMware using NFS, disk was pretty slow (40% of a single disk), but this was over 1Gb networking which was clearly saturating.  Hence I am moving to KVM to use GlusterFS, hoping for better performance and bonding; it will be interesting to see which host type runs faster.
> 
> Which operating system is Gluster on?
> 
> Did you do iperf between all nodes?
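> e.g. something like this between each pair of nodes (hostnames taken
> from your brick list; -P 4 just adds parallel streams):
> 
>     iperf -s                   # on rommel10
>     iperf -c rommel10 -P 4     # from barbelith10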
> 
> On 1 May 2018 at 00:14, Tony Hoyle <tony at hoyle.me.uk> wrote:
> Hi
> 
> I'm trying to set up a 3-node Gluster cluster, and am hitting huge
> performance bottlenecks.
> 
> The 3 servers are connected over 10GbE and GlusterFS is set to create a
> 3-node replica.
> 
> With a single VM performance was poor, but I could have lived with it.
> 
> I tried to stress it by putting copies of a bunch of VMs on the servers
> and seeing what happened under parallel load... network load never broke
> 13Mbps and disk load peaked at under 1Mbps.  VMs were so slow that
> services timed out during boot, causing failures.
> 
> I checked the network with iperf and it reached 9.7Gb/s, so the hardware
> is fine... it just seems that for some reason GlusterFS isn't using it.
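> (A raw sequential write through the fuse mount would confirm that; mount
> path assumed here, and O_DIRECT should pass through since remote-dio is
> enabled:
> 
>     dd if=/dev/zero of=/mnt/gv0/ddtest bs=1M count=1024 oflag=direct
> )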
> 
> gluster volume top gv0 read-perf shows 0Mbps for all files, although I'm
> not sure whether the command is working.
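> (As far as I can tell, without arguments it only reports previously
> gathered stats; given an explicit block size and count it runs an actual
> brick read test, e.g.:
> 
>     gluster volume top gv0 read-perf bs 1048576 count 1024 list-cnt 10
> )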
> 
> There's probably a magic setting somewhere, but I've spent a couple of
> days trying to find it now...
> 
> Tony
> 
> stats:
>    Block Size:                512b+                1024b+                2048b+
>  No. of Reads:                    0                     2                     0
> No. of Writes:                   40                   141                   399
> 
>    Block Size:               4096b+                8192b+               16384b+
>  No. of Reads:                  173                    24                     4
> No. of Writes:                18351                  5049                  2478
> 
>    Block Size:              32768b+               65536b+              131072b+
>  No. of Reads:                   12                   113                     0
> No. of Writes:                 1640                   648                   200
> 
>    Block Size:             262144b+              524288b+             1048576b+
>  No. of Reads:                    0                     0                     0
> No. of Writes:                  329                    55                   139
> 
>    Block Size:            2097152b+
>  No. of Reads:                    0
> No. of Writes:                    1
>  %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
>  ---------   -----------   -----------   -----------   ------------        ----
>       0.00       0.00 us       0.00 us       0.00 us             41     RELEASE
>       0.00       0.00 us       0.00 us       0.00 us              6  RELEASEDIR
>       0.00       3.43 us       2.65 us       4.10 us              6     OPENDIR
>       0.00     217.85 us     217.85 us     217.85 us              1     SETATTR
>       0.00      66.38 us      49.47 us      80.57 us              4        SEEK
>       0.00     394.18 us     394.18 us     394.18 us              1   FTRUNCATE
>       0.00     116.68 us      29.88 us     186.25 us             16    GETXATTR
>       0.00     397.32 us     267.18 us     540.38 us             10     XATTROP
>       0.00     553.09 us     244.97 us    1242.98 us             12     READDIR
>       0.00     201.60 us      69.61 us     744.71 us             41        OPEN
>       0.00     734.96 us      75.05 us   37399.38 us            328        READ
>       0.01    1750.65 us      33.99 us  750562.48 us            591      LOOKUP
>       0.02    2972.84 us      30.72 us  788018.47 us            496      STATFS
>       0.03   10951.33 us      35.36 us  695155.13 us            166        STAT
>       0.42    2574.98 us     208.73 us 1710282.73 us          11877    FXATTROP
>       2.80     609.20 us     468.51 us  321422.91 us         333946   RCHECKSUM
>       5.04     548.76 us      14.83 us 76288179.46 us         668188     INODELK
>      18.46  149940.70 us      13.59 us 79966278.04 us           8949    FINODELK
>      20.04  395073.91 us      84.99 us  3835355.67 us           3688       FSYNC
>      53.17  131171.66 us      85.76 us  3838020.34 us          29470       WRITE
>       0.00       0.00 us       0.00 us       0.00 us           7238      UPCALL
>       0.00       0.00 us       0.00 us       0.00 us           7238     CI_IATT
> 
>     Duration: 1655 seconds
>    Data Read: 8804864 bytes
> Data Written: 612756480 bytes
> 
> config:
> Volume Name: gv0
> Type: Replicate
> Volume ID: a0b6635a-ae48-491b-834a-08e849e87642
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: barbelith10:/tank/vmdata/gv0
> Brick2: rommel10:/tank/vmdata/gv0
> Brick3: panzer10:/tank/vmdata/gv0
> Options Reconfigured:
> diagnostics.count-fop-hits: on
> diagnostics.latency-measurement: on
> features.cache-invalidation: on
> nfs.disable: on
> cluster.server-quorum-type: server
> cluster.quorum-type: auto
> network.remote-dio: enable
> cluster.eager-lock: enable
> performance.stat-prefetch: off
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
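> 
> (Most of these match Gluster's packaged "virt" tuning group for VM
> workloads; assuming that group file ships with this Gluster version,
> the full set can be applied in one go:
> 
>     gluster volume set gv0 group virt
> )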
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
