[Gluster-devel] Problem during reproducing smallfile experiment on Gluster 10

Strahil Nikolov hunter86_bg at yahoo.com
Thu Jan 20 09:11:16 UTC 2022


Also, it's worth selecting noop/none as the I/O scheduler in the VMs, as deadline (or other schedulers) reorders I/O requests (and thus delays I/O), while the hypervisor already does the same (reordering and merging requests from multiple VMs).
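For example, a quick way to check and switch the scheduler inside a VM (a minimal sketch; the device name vda and the availability of the multi-queue 'none' scheduler are assumptions, adjust for your distribution and disks):

    # check which scheduler is active (the one in brackets)
    cat /sys/block/vda/queue/scheduler
    # switch to none/noop at runtime
    echo none > /sys/block/vda/queue/scheduler
    # persist across reboots with a udev rule, e.g. /etc/udev/rules.d/60-io-scheduler.rules
    ACTION=="add|change", KERNEL=="vd[a-z]", ATTR{queue/scheduler}="none"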
Also, mount options and the filesystem play a significant role. For example, using noatime/relatime on the bricks reduces the amount of unnecessary I/O. On top of that, if you use SELinux, I would recommend mounting with 'context="system_u:object_r:glusterd_brick_t:s0"' (remove the single quotes), which tells the kernel that the brick contains only objects of type glusterd_brick_t and skips reading the SELinux label on every file.
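As an illustration, a brick entry in /etc/fstab could look like the line below (the device, mount point and XFS are assumptions, not your actual layout):

    # hypothetical brick mount with noatime and a fixed SELinux context
    /dev/vdb  /gluster/brick1  xfs  noatime,context="system_u:object_r:glusterd_brick_t:s0"  0 0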
It has been discussed several times on the lists that, with more threads, at some point locking contention is observed, leading to poor performance.
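If you want to experiment with that, the relevant Gluster thread counts can be tuned per volume, for example (the volume name is a placeholder and the values are only a starting point; the sweet spot depends on your CPUs and workload):

    gluster volume set VOLNAME client.event-threads 4
    gluster volume set VOLNAME server.event-threads 4
    gluster volume set VOLNAME performance.io-thread-count 16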
In VMware you should tune the VM for high performance and low latency, and also disable Large Receive Offload (LRO) on the Gluster NICs, as the latency it introduces can interfere with your results. Disabling LRO increases CPU consumption, so adjust the number of cores accordingly.
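Inside a Linux guest with vmxnet3 NICs, LRO can usually be checked and turned off with ethtool (the interface name ens192 is an assumption; the equivalent host-side settings live in the ESXi advanced options):

    # check current state
    ethtool -k ens192 | grep large-receive-offload
    # disable LRO on the gluster NIC
    ethtool -K ens192 lro off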
Using hyperthreading at the hypervisor level could also show differences in your results, as the second thread of a core is not as performant as a full core.
Another tunable that you can enable is the rhgs-random-io tuned profile:

    [main]
    include=throughput-performance

    [sysctl]
    vm.dirty_ratio = 5
    vm.dirty_background_ratio = 2
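If the RHGS tuned profiles are not packaged on your distribution, a custom profile with the same content can be dropped into the standard tuned layout and activated like this:

    mkdir -p /etc/tuned/rhgs-random-io
    cat > /etc/tuned/rhgs-random-io/tuned.conf <<'EOF'
    [main]
    include=throughput-performance

    [sysctl]
    vm.dirty_ratio = 5
    vm.dirty_background_ratio = 2
    EOF
    tuned-adm profile rhgs-random-io
    tuned-adm active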

From an infrastructure perspective there are a lot of tunables (including some I didn't mention) for the VMware hypervisor, the VM and the OS.
Best Regards,
Strahil Nikolov
