[Gluster-users] Finding performance bottlenecks
Ben Turner
bturner at redhat.com
Mon May 7 15:03:09 UTC 2018
----- Original Message -----
> From: "Tony Hoyle" <tony at hoyle.me.uk>
> To: "Gluster Users" <gluster-users at gluster.org>
> Sent: Tuesday, May 1, 2018 5:38:38 AM
> Subject: Re: [Gluster-users] Finding performance bottlenecks
>
> On 01/05/2018 02:27, Thing wrote:
> > Hi,
> >
> > So is KVM or VMware the host(s)? I basically have the same setup,
> > i.e. 3 x 1TB "raid1" nodes and VMs, but 1Gb networking. I did notice
> > that with VMware using NFS, disk was pretty slow (40% of a single
> > disk), but that was over 1Gb networking, which was clearly saturated.
> > Hence I am moving to KVM to use GlusterFS, hoping for better
> > performance and bonding; it will be interesting to see which host
> > type runs faster.
>
> 1Gb will always be the bottleneck in that situation - that's going to
> max out at the speed of a single disk or lower. At minimum you need to
> bond interfaces, and preferably go to 10Gb.
>
> Our NFS actually ends up faster than local disk, because the read
> speed of the RAID array is higher than that of a single local disk.
>
> > Which operating system is gluster on?
>
> Debian Linux. Supermicro motherboards, 24-core i7 with 128GB of RAM on
> the VM hosts.
>
> > Did you do iperf between all nodes?
>
> Yes, around 9.7Gb/s
>
> It doesn't appear to be raw read speed but iowait. Under NFS load with
> multiple VMs I get an iowait of around 0.3%. Under gluster it's never
> less than 10%, and glusterfsd is often at the top of the CPU usage.
> This causes a load average of ~12, compared to 3 over NFS, and
> absolutely kills VMs, especially Windows ones - one machine I set
> booting was still booting 30 minutes later!
Are you properly aligned? This sounds like the xattr reads/writes used by gluster may be eating your IOPS, and this is exacerbated when storage is misaligned. I suggest getting on the latest version of oVirt (I have seen this help) and evaluating your storage stack:
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.1/html/administration_guide/formatting_and_mounting_bricks
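A quick way to check whether an existing brick is already aligned (a
sketch - the device name here is just an example):

# Offset to the start of data on the PV; for an aligned setup this
# should be a multiple of the RAID full stripe size
pvs -o +pe_start /dev/sdb

# MIN-IO / OPT-IO show the stripe unit and full stripe the kernel
# sees for the device
lsblk -t /dev/sdb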
pvcreate --dataalignment = full stripe (RAID stripe unit * # of data disks)
vgcreate --physicalextentsize = full stripe
lvcreate like normal
mkfs.xfs -f -i size=512 -n size=8192 -d su=<stripe size>,sw=<number of data disks> DEVICE
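For example, assuming a hypothetical RAID 6 of 12 disks (10 data disks)
with a 128K stripe unit, the full stripe is 128K * 10 = 1280K:

pvcreate --dataalignment 1280K /dev/sdb
vgcreate --physicalextentsize 1280K rhs_vg /dev/sdb
lvcreate -L 500G -n rhs_lv rhs_vg
mkfs.xfs -f -i size=512 -n size=8192 -d su=128k,sw=10 /dev/rhs_vg/rhs_lv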
And mount with:
/dev/rhs_vg/rhs_lv /mountpoint xfs rw,inode64,noatime,nouuid 1 2
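To pick that up without a reboot (mount point again just an example):

mount -o rw,inode64,noatime,nouuid /dev/rhs_vg/rhs_lv /mountpoint

# sunit/swidth are reported in filesystem blocks and should match the
# RAID geometry you gave mkfs.xfs
xfs_info /mountpoint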
I normally use the rhgs-random-io tuned profile and apply the gluster "virt" volume group settings.
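Concretely, something like this (the volume name is an example, and the
rhgs-random-io profile ships with Red Hat Gluster Storage, so it may not
be present on Debian):

tuned-adm profile rhgs-random-io
gluster volume set <volname> group virt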
HTH
-b
>
> Tony
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users