[Gluster-users] Poor performance on a server-class system vs. desktop
alpha754293 at hotmail.com
Thu Nov 26 03:33:53 UTC 2020
Is there a way to check whether the GlusterFS write requests are being routed through the network interface?
I am asking because of the bricks/host definition you showed below.
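Something like this could show whether the traffic actually leaves the box or stays on loopback (a rough sketch against a live Gluster setup; 24007 is the default glusterd management port and bricks usually listen on 49152 and up, so adjust if your ports differ):

```shell
# List established TCP connections belonging to Gluster processes.
# If the peer addresses are 127.0.0.1, the replica traffic never
# touches the real NIC even though the bricks are defined by IP.
ss -tnp | grep -E 'gluster'

# Alternatively, watch per-interface byte counters while fio runs;
# growth on "lo" vs. the physical interface tells you where the
# writes are going.
cat /proc/net/dev
```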
From: gluster-users-bounces at gluster.org <gluster-users-bounces at gluster.org> on behalf of Strahil Nikolov <hunter86_bg at yahoo.com>
Sent: November 25, 2020 12:42 PM
To: Dmitry Antipov <dmantipov at yandex.ru>
Cc: gluster-users <gluster-users at gluster.org>
Subject: Re: [Gluster-users] Poor performance on a server-class system vs. desktop
Getting the same performance from 2 very fast disks indicates that you are
hitting a limit somewhere else.
You can start with this article:
Most probably, increasing performance.io-thread-count could help.
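For example, something along these lines for your "test1" volume (the value 32 is just a guess to try; the option's default is 16):

```shell
# Raise the number of io-threads for the volume.
gluster volume set test1 performance.io-thread-count 32

# Confirm the new value took effect.
gluster volume get test1 performance.io-thread-count
```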
On Wed, 25.11.2020 at 19:08 +0300, Dmitry Antipov wrote:
> I'm trying to investigate the poor I/O performance results observed
> on a server-class system vs. a desktop-class one.
> The second one is an 8-core notebook with an NVMe disk. According to
> fio --name test --filename=XXX --bs=4k --rw=randwrite \
>     --ioengine=libaio --direct=1 \
>     --iodepth=128 --numjobs=1 --runtime=60 --time_based=1
> this disk is able to perform 4K random writes at ~100K IOPS. When I
> create a glusterfs volume using the same disk as the backing store:
> Volume Name: test1
> Type: Replicate
> Volume ID: 87bad2a9-7a4a-43fc-94d2-de72965b63d6
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Brick1: 192.168.1.112:/glusterfs/test1-000
> Brick2: 192.168.1.112:/glusterfs/test1-001
> Brick3: 192.168.1.112:/glusterfs/test1-002
> Options Reconfigured:
> storage.fips-mode-rchecksum: on
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: off
> and run the same fio workload against the mounted volume, I'm seeing
> ~10K IOPS. So adding an extra layer (glusterfs :-) between an I/O
> client (fio in this case) and the NVMe disk introduces ~10x overhead.
> Maybe worse than expected, but things get even worse when I switch
> to the server.
> The server is a 32-core machine with an NVMe disk capable of serving
> the same I/O pattern at ~200K IOPS. I expected something close to
> linear scalability, i.e. ~20K IOPS, when running the same fio
> workload on a gluster volume. But I got something very close to the
> same ~10K IOPS as seen on the desktop-class machine.
> So here the overhead is ~20x vs. ~10x on the desktop.
> The OSes are different (Fedora 33 on the notebook and a relatively
> old Debian 9 on the server), but both systems run fairly recent
> 5.9.x kernels (without massive tricky tuning via sysctl or similar
> methods) and glusterfs 8.2, using XFS as the filesystem under the
> bricks.
> I would greatly appreciate any ideas on debugging this.
> Community Meeting Calendar:
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
> Gluster-users mailing list
> Gluster-users at gluster.org