[Gluster-users] GFS performance under heavy traffic
Strahil
hunter86_bg at yahoo.com
Wed Jan 8 06:13:24 UTC 2020
As your issue is the network, consider changing the MTU if the infrastructure allows it.
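For example, something along these lines (just a sketch - the interface name eth0 is an assumption, and every switch/router in the path must also support jumbo frames):

# Check the current MTU of the interface carrying gluster traffic (eth0 is a placeholder)
ip link show eth0 | grep mtu
# Temporarily raise it for a test; only make it persistent in the network config if it helps
ip link set dev eth0 mtu 9000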
The tuned profiles are very important, as they control the ratios for flushing data held in memory out to disk (or, in this case, from Gluster over the network). You want to avoid keeping a lot of data in the client's memory (in this case the Gluster server), only to unleash it over the network all at once.
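As a rough illustration (a sketch only - pick a profile that suits your workload rather than copying these names blindly):

# See which tuned profile is active and what is available
tuned-adm active
tuned-adm list
# The kind of kernel knobs such profiles adjust: how much dirty data is held in RAM before flushing
sysctl vm.dirty_ratio vm.dirty_background_ratio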
These two changes can be made online, and I do not expect any issues.
The filesystem of the bricks is also important: the faster they soak up data, the faster Gluster can take more.
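If you do end up reformatting a brick as XFS, the usual upstream recommendation looks roughly like this (a sketch - /dev/sdb1 and the mount point are placeholders):

# 512-byte inodes leave room for gluster's extended attributes
mkfs.xfs -f -i size=512 /dev/sdb1
mount -o noatime,inode64 /dev/sdb1 /gluster_bricks/brick1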
Of course, you need to reproduce it in a test environment first.
Also consider checking whether any kind of backup is running on the bricks. I have seen too many 'miracles' :D
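A quick way to check for that (sketch only - adjust the process names to whatever backup software could be in use):

# Anything currently generating I/O on the brick disks
iotop -obn 3
# Any backup-like processes running (names here are just examples)
ps -ef | grep -Ei 'rsync|tar|bacula|borg' | grep -v grep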
Best Regards,
Strahil Nikolov

On Jan 8, 2020 01:03, David Cunningham <dcunningham at voisonics.com> wrote:
>
> Hi Strahil,
>
> Thanks for that. The queue/scheduler file for the relevant disk reports "noop [deadline] cfq", so deadline is being used. It is using ext4, and I've verified that the MTU is 1500.
>
> We could change the filesystem from ext4 to xfs, but in this case we're not looking to tinker around the edges and get a small performance improvement - we need a very large improvement on the 114MBps of network traffic to make it usable.
>
> I think what we really need to do first is to reproduce the problem in testing, and then come back to possible solutions.
>
>
> On Tue, 7 Jan 2020 at 22:15, Strahil Nikolov <hunter86_bg at yahoo.com> wrote:
>>
>> To find the scheduler, find all PVs of the LV that is providing your storage:
>>
>> [root at ovirt1 ~]# df -Th /gluster_bricks/data_fast
>> Filesystem Type Size Used Avail Use% Mounted on
>> /dev/mapper/gluster_vg_nvme-gluster_lv_data_fast xfs 100G 39G 62G 39% /gluster_bricks/data_fast
>>
>>
>> [root at ovirt1 ~]# pvs | grep gluster_vg_nvme
>> /dev/mapper/vdo_nvme gluster_vg_nvme lvm2 a-- <1024.00g 0
>>
>> [root at ovirt1 ~]# cat /etc/vdoconf.yml
>> ####################################################################
>> # THIS FILE IS MACHINE GENERATED. DO NOT EDIT THIS FILE BY HAND.
>> ####################################################################
>> config: !Configuration
>>   vdos:
>>     vdo_nvme: !VDOService
>>       device: /dev/disk/by-id/nvme-ADATA_SX8200PNP_2J1120011596
>>
>>
>> [root at ovirt1 ~]# ll /dev/disk/by-id/nvme-ADATA_SX8200PNP_2J1120011596
>> lrwxrwxrwx. 1 root root 13 Dec 17 20:21 /dev/disk/by-id/nvme-ADATA_SX8200PNP_2J1120011596 -> ../../nvme0n1
>> [root at ovirt1 ~]# cat /sys/block/nvme0n1/queue/scheduler
>> [none] mq-deadline kyber
>>
>> Note: if the device is under multipath, you need to check all paths (you can get them from the 'multipath -ll' command).
>> The only scheduler you should avoid is "cfq", which was the default on RHEL 6 & SLES 11.
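>> For example, to move a disk off cfq at runtime (a sketch - sdb is a placeholder, and the change does not persist across reboots):
>>
>> # See the available schedulers; the active one is shown in brackets
>> cat /sys/block/sdb/queue/scheduler
>> # Switch to deadline (use mq-deadline on blk-mq kernels)
>> echo deadline > /sys/block/sdb/queue/scheduler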
>>
>> XFS has better performance than ext-based filesystems.
>>
>> Another tuning option is to use Red Hat's tuned profiles for Gluster. You can extract them from ftp://ftp.redhat.com/redhat/linux/enterprise/7Server/en/RHS/SRPMS/redhat-storage-server-3.4.2.0-1.el7rhgs.src.rpm (or a newer version if you can find one).
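>> For example, to pull the profiles out of that SRPM (a sketch, assuming rpm2cpio/cpio are installed; the profile names may differ between versions):
>>
>> rpm2cpio redhat-storage-server-3.4.2.0-1.el7rhgs.src.rpm | cpio -idmv
>> # Copy the extracted rhgs-* tuned profile directories under /usr/lib/tuned/, then activate one
>> tuned-adm profile rhgs-random-io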
>>
>>
>> About MTU - a larger MTU reduces the amount of packets the kernel has to process, but it requires the infrastructure to support it too. You can test by setting the MTU on both sides to 9000 and then running 'tracepath remote-ip'. Also run a ping with a large payload and the do-not-fragment flag -> 'ping -M do -s 8900 <