[Gluster-users] Gluster 11 and NVME
Strahil Nikolov
hunter86_bg at yahoo.com
Thu Feb 27 19:22:07 UTC 2025
These are just a small sample of the virt group settings that you are not using:
performance.quick-read=off
performance.read-ahead=off
performance.io-cache=off
network.remote-dio=disable
performance.strict-o-direct=on
cluster.lookup-optimize=off
Here is an example file https://github.com/gluster/glusterfs/blob/devel/extras/group-virt.example
Also glusterfs-server provides a lot more groups of settings in /var/lib/glusterd/groups directory.
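As a sketch (assuming the volume is named VMS, as in the original post below), the whole virt group can be applied in one command instead of setting each option individually:

```shell
# Applies every option listed in /var/lib/glusterd/groups/virt to the volume
gluster volume set VMS group virt
```

Afterwards, `gluster volume info VMS` shows the options the group changed.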
Best Regards,
Strahil Nikolov
On Saturday, February 22, 2025 at 14:56:14 GMT+2, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote:
Hi there.
I'd like to know if there are any known issues with GlusterFS and NVMe. This week I had two customers for whom I built 2-node Proxmox VE clusters with GlusterFS 11.
I created it as follows. On both nodes I ran:

mkdir /data1
mkdir /data2
mkfs.xfs /dev/nvme1
mkfs.xfs /dev/nvme2
mount /dev/nvme1 /data1
mount /dev/nvme2 /data2

After installing glusterfs and doing the peer probe, I ran:

gluster vol create VMS replica 2 gluster1:/data1/vms gluster2:/data1/vms gluster1:/data2/vms gluster2:/data/vms
To avoid split-brain issues, I applied these configurations:

gluster vol set VMS cluster.heal-timeout 5
gluster vol heal VMS enable
gluster vol set VMS cluster.quorum-reads false
gluster vol set VMS cluster.quorum-count 1
gluster vol set VMS network.ping-timeout 2
gluster vol set VMS cluster.favorite-child-policy mtime
gluster vol heal VMS granular-entry-heal enable
gluster vol set VMS cluster.data-self-heal-algorithm full
gluster vol set VMS features.shard on
gluster vol set VMS performance.write-behind off
gluster vol set VMS performance.flush-behind off
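As a sketch for verifying the result (assuming the volume name VMS used above), the effective option values and the heal state after a node returns can be checked with:

```shell
# Show the effective value of one of the options set above
gluster volume get VMS cluster.favorite-child-policy
# Summarize pending and ongoing heals, e.g. after the first server comes back
gluster volume heal VMS info summary
```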
So this configuration allows me to power down the first server and the VMs restart on the secondary server, with no issues at all.
I have the very same scenario at another customer, but there we are working with Kingston DC600M SSDs.
It turns out that on the servers with NVMe I got a lot of disk corruption inside the VMs. If I reboot, things get worse.
Does anybody know of any cases of gluster and NVMe issues like that? Is there any fix for it?
Thanks
---
Gilberto Nunes Ferreira
(47) 99676-7530 - WhatsApp / Telegram
________
Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users at gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users