[Gluster-users] Understanding gluster performance

Gionatan Danti g.danti@assyoma.it
Mon Jan 20 15:30:11 UTC 2020


Hi all,
I would like to better understand gluster performance and how to 
profile/analyze it.

I set up two old test machines, each with a quad-core i7 CPU, 8 GB of 
RAM and 4x 5400 RPM disks in software RAID 10. The OS is CentOS 8.1 and 
I am using Gluster 6.7. To avoid being limited by the mechanical HDDs, 
I created a small replica 2 volume using /dev/shm/gluster for both 
bricks.
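
For reference, the volume was created along these lines (a sketch, not 
an exact transcript; the trailing "force" may or may not be needed 
depending on where the brick directory lives):

    # on both nodes
    mkdir -p /dev/shm/gluster
    # on one node
    gluster volume create tv0 replica 2 \
        singularity:/dev/shm/gluster blackhole:/dev/shm/gluster force
    gluster volume start tv0
    # on the client
    mount -t glusterfs localhost:tv0 /mnt/fuse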

Testing with fio using 4k random writes and fsync=1, I see the following:
- writing directly to /dev/shm: almost 300K IOPS (expected, as I am 
writing to memory);
- writing to /mnt/fuse (where the volume is mounted): 250 IOPS;
- as above, but sometimes I get as low as 20-30 IOPS for minutes.

Writing to /mnt/fuse without fsync=1 I get about 5500 IOPS, without the 
strange (and very low) drops. In this case I seem to be limited by CPU, 
because both glusterd and glusterfsd are near/above 100% utilization of 
a core.
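
The CPU figures come from simply watching the Gluster processes with 
top, along these lines (the pgrep pattern is just an example):

    # batch mode, one iteration, only pids whose name matches "gluster"
    top -b -n 1 -p "$(pgrep -d, gluster)"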

Further details and volume info can be found below [1].

So, I have some questions:
- why is performance so low with fsync? 250 IOPS means ~4 ms per synced 
write, which seems a lot for an in-memory brick;
- why do I get such low IOPS (20-30) for minutes at a time?
- what is capping the non-fsync test?
- why are both glusterd and glusterfsd so CPU intensive? I can 
understand glusterfsd itself needing more CPU, but glusterd should only 
manage the other processes and send volume information to asking 
clients, right?
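
For what it is worth, the only profiling tool I know of is the built-in 
one; I plan to capture per-FOP latencies roughly as follows:

    gluster volume profile tv0 start
    # ... run the fio job ...
    gluster volume profile tv0 info
    gluster volume profile tv0 stop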

Thanks.

[1] [root@blackhole gluster]# gluster volume info tv0
Volume Name: tv0
Type: Replicate
Volume ID: 96534afe-bfde-4d60-a94d-379278ab45c4
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: singularity:/dev/shm/gluster
Brick2: blackhole:/dev/shm/gluster
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

[root@blackhole tmp]# mount | grep glusterfs
localhost:tv0 on /mnt/fuse type fuse.glusterfs 
(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)

[root@blackhole glusterfs]# fio --name=test 
--filename=/mnt/fuse/test.img --size=256M --rw=randwrite --fsync=1
test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=psync, iodepth=1
fio-3.7
Starting 1 process
Jobs: 1 (f=1): [w(1)][11.6%][r=0KiB/s,w=976KiB/s][r=0,w=244 IOPS][eta 
04m:19s]

[root@blackhole tmp]# fio --name=test --filename=/mnt/fuse/test.img 
--size=256M --rw=randwrite --fsync=1
test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=psync, iodepth=1
fio-3.7
Starting 1 process
Jobs: 1 (f=1): [w(1)][1.8%][r=0KiB/s,w=112KiB/s][r=0,w=28 IOPS][eta 
38m:16s]

[root@blackhole tmp]# fio --name=test --filename=/mnt/fuse/test.img 
--size=256M --rw=randwrite
test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=psync, iodepth=1
fio-3.7
Starting 1 process
Jobs: 1 (f=1): [w(1)][100.0%][r=0KiB/s,w=21.3MiB/s][r=0,w=5462 IOPS][eta 
00m:00s]

[root@blackhole gluster]# gluster volume heal tv0 info
Brick singularity:/dev/shm/gluster
Status: Connected
Number of entries: 0
Brick blackhole:/dev/shm/gluster
Status: Connected
Number of entries: 0

-- 
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti@assyoma.it - info@assyoma.it
GPG public key ID: FF5F32A8

