[Gluster-users] Is such level of performance degradation to be expected?
Sam
mygluster22 at eml.cc
Sun Jan 23 05:37:58 UTC 2022
Hello Everyone,
I am just starting out with Gluster, so please pardon my ignorance if I am doing something incorrectly. To gauge the efficiency of GlusterFS, I wanted to compare its performance against the native file system it sits on, so I kept both the Gluster server and client on localhost to rule out the network.
"/data" is the XFS mount point of my 36 spinning disks in a RAID10 array. I got following results when I ran a fio based bench script directly on "/data".
fio Disk Speed Tests (Mixed R/W 50/50):
---------------------------------------
Block Size | 4k            (IOPS) | 64k           (IOPS)
  ------   | ---            ----  | ----           ----
Read       | 302.17 MB/s  (75.5k) | 1.68 GB/s    (26.3k)
Write      | 302.97 MB/s  (75.7k) | 1.69 GB/s    (26.4k)
Total      | 605.15 MB/s (151.2k) | 3.38 GB/s    (52.8k)
           |                      |
Block Size | 512k          (IOPS) | 1m            (IOPS)
  ------   | ---            ----  | ----           ----
Read       | 1.73 GB/s     (3.3k) | 3.24 GB/s     (3.1k)
Write      | 1.82 GB/s     (3.5k) | 3.46 GB/s     (3.3k)
Total      | 3.56 GB/s     (6.9k) | 6.71 GB/s     (6.5k)
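For context, the script's mixed read/write test is roughly equivalent to a direct fio invocation along these lines (the exact job parameters here are my approximation, not necessarily the script's literal defaults; the other block sizes only change --bs):

# fio --name=randrw-4k --filename=/data/fio-test.bin --size=2G \
      --ioengine=libaio --direct=1 --rw=randrw --rwmixread=50 \
      --bs=4k --iodepth=64 --numjobs=2 --runtime=30 --time_based \
      --group_reporting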
I then created a simple gluster volume "test" under "/data", after pointing "server" to 127.0.0.1 in "/etc/hosts", and mounted it on the same machine at "/mnt":
# mkdir /data/gluster
# gluster volume create test server:/data/gluster
# gluster volume start test
# mount -t glusterfs server:test /mnt
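For completeness, this is a single-brick distribute volume left at stock options; I confirmed the layout and that all processes were up with the standard CLI before benchmarking:

# gluster volume info test
# gluster volume status test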
Now, when I run the same bench script on "/mnt", the results are abysmal:
fio Disk Speed Tests (Mixed R/W 50/50):
---------------------------------------
Block Size | 4k            (IOPS) | 64k           (IOPS)
  ------   | ---            ----  | ----           ----
Read       | 27.96 MB/s    (6.9k) | 13.96 MB/s     (218)
Write      | 27.98 MB/s    (6.9k) | 14.59 MB/s     (228)
Total      | 55.94 MB/s   (13.9k) | 28.55 MB/s     (446)
           |                      |
Block Size | 512k          (IOPS) | 1m            (IOPS)
  ------   | ---            ----  | ----           ----
Read       | 41.21 MB/s      (80) | 126.50 MB/s    (123)
Write      | 42.97 MB/s      (83) | 134.92 MB/s    (131)
Total      | 84.18 MB/s     (163) | 261.42 MB/s    (254)
There is plenty of free CPU & RAM available when the above test is running (70-80%) and negligible i/o wait (2-3%) so where is the bottleneck then? Or this kind of performance degradation is to be expected? Will really appreciate any insights.
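If it helps narrow this down, I can re-run the test with Gluster's built-in profiling enabled and share the per-brick, per-FOP latency stats. As I understand it, the stock CLI for that is to start profiling, run the benchmark against /mnt, and then dump the counters:

# gluster volume profile test start
# gluster volume profile test info
# gluster volume profile test stop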
Thanks,
Sam