[Gluster-users] Is such level of performance degradation to be expected?
hunter86_bg at yahoo.com
Wed Jan 26 06:01:03 UTC 2022
The point of mentioning all of this is that you missed the basics of setting up GlusterFS and you should start from scratch.
Synthetic benchmarks are useless; you need to test with your actual workload. When you don't get the performance you need -> the next step is profiling and tuning. Profiling a synthetic workload tells you nothing.
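For reference, Gluster has a built-in profiler; a minimal sketch (the volume name `myvol` is a placeholder for your own volume):

```shell
# Enable profiling on the volume
gluster volume profile myvol start
# ... run your real workload here ...
# Show per-brick I/O statistics and FOP latencies
gluster volume profile myvol info
# Disable profiling when done
gluster volume profile myvol stop
```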
I guess you didn't notice that you can control the amount of server and client threads. Too few and performance will be low; too many and lock contention occurs. Sharing the volume options would have helped ;)
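As an illustration, the thread counts are ordinary volume options (volume name and values are placeholders; tune them for your hardware):

```shell
# Inspect the current settings
gluster volume get myvol client.event-threads
gluster volume get myvol server.event-threads
# Raise thread counts cautiously; too many threads causes lock contention
gluster volume set myvol client.event-threads 4
gluster volume set myvol server.event-threads 4
```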
The sysctl dirty settings are important, as all writes are held as "dirty" pages before they are flushed to disk. Lowering the threshold at which flushing starts will reduce potential issues.
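For example, the kernel's flush thresholds can be lowered like this (the values are purely illustrative; pick them based on your RAM and workload):

```shell
# Start background flushing at 5% of RAM instead of the default 10%
sysctl -w vm.dirty_background_ratio=5
# Block writers at 10% dirty pages instead of the default 20%
sysctl -w vm.dirty_ratio=10
```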
Also, don't expect miracles while you use the FUSE client. If you need higher performance (after tuning everything else), you can use libgfapi (NFS-Ganesha, for example, uses it).
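As a rough sketch of the libgfapi route, an NFS-Ganesha export backed by a Gluster volume looks roughly like this (volume name, hostname, and IDs are placeholders):

```
# /etc/ganesha/ganesha.conf export block
EXPORT {
    Export_Id = 1;
    Path = "/";
    Pseudo = "/myvol";
    Access_Type = RW;
    FSAL {
        Name = GLUSTER;       # talks to the bricks via libgfapi, bypassing FUSE
        Hostname = "localhost";
        Volume = "myvol";
    }
}
```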
Best Regards,
Strahil Nikolov
On Mon, Jan 24, 2022 at 15:30, Sam<mygluster22 at eml.cc> wrote: Thanks for your response Strahil.
> Usually synthetic benchmarks do not show anything, because gluster has to be tuned to your real workload and not to a synth.
I understand that they do not paint the real picture. But running the same benchmark across a set of file-systems on the same server should produce results that can be compared, shouldn't it?
> Also, RH recommends disks of 3-4TB each in a HW raid of 10-12 disks with a stripe size between 1M and 2M.
Next, you need to ensure that hardware alignment is properly done.
Gluster isn't interacting with the underlying RAID device here, so that shouldn't matter. If the XFS layer just below Gluster gives me 3.5 GB/s random reads and writes (--rw=randrw --direct=1), why is Gluster above it struggling at 130 MB/s on the same RAID setup? That is 27 times slower.
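For context, the comparison means running an identical fio job against the brick's XFS path and against the FUSE mount; a sketch (paths, block size, and file size are illustrative, only --rw=randrw --direct=1 come from the actual test):

```shell
# Same job on the raw XFS brick path...
fio --name=randrw --rw=randrw --direct=1 --bs=4k --size=1G \
    --filename=/bricks/brick1/testfile
# ...and on the Gluster FUSE mount backed by that brick
fio --name=randrw --rw=randrw --direct=1 --bs=4k --size=1G \
    --filename=/mnt/glustervol/testfile
```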
I understand that a Gluster volume may perform better when its bricks are distributed across different nodes, but such a large performance penalty compared to the file-system it resides on doesn't inspire much confidence.
I may be wrong here, but system settings, cache settings, RAID cache, etc. shouldn't come into play, since the parent file-system performs perfectly fine with the default settings.