[Gluster-users] Gluster 11 and NVME
Gilberto Ferreira
gilberto.nunes32 at gmail.com
Sat Feb 22 12:55:28 UTC 2025
Hi there.
I'd like to know if there are any known issues with GlusterFS on NVMe drives.
This week I had two customers where I built a 2-node Proxmox VE cluster with GlusterFS 11.
I set it up like this. On both nodes I did:
mkdir /data1
mkdir /data2
mkfs.xfs /dev/nvme1
mkfs.xfs /dev/nvme2
mount /dev/nvme1 /data1
mount /dev/nvme2 /data2
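(Side note: /dev/nvme1 and /dev/nvme2 above are shorthand; on most systems the NVMe namespace block devices show up with names like /dev/nvme0n1. To make the brick mounts survive a reboot I also add them to /etc/fstab by UUID, roughly like this, with the real UUIDs taken from blkid:)
# look up the filesystem UUIDs (device names here are examples)
blkid /dev/nvme0n1 /dev/nvme1n1
# then add entries like these to /etc/fstab:
UUID=<uuid-from-blkid>  /data1  xfs  defaults,noatime  0 0
UUID=<uuid-from-blkid>  /data2  xfs  defaults,noatime  0 0
mount -a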
After installing GlusterFS and doing the peer probe, I ran:
gluster vol create VMS replica 2 gluster1:/data1/vms gluster2:/data1/vms gluster1:/data2/vms gluster2:/data2/vms
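Then I start the volume and double-check that it came up as a 2 x 2 distributed-replicate (commands from memory, but these are the standard ones):
gluster vol start VMS
gluster vol info VMS    # should show Type: Distributed-Replicate, 2 x 2 = 4
gluster vol status VMS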
To avoid split-brain issues, I applied these settings:
gluster vol set VMS cluster.heal-timeout 5
gluster vol heal VMS enable
gluster vol set VMS cluster.quorum-reads false
gluster vol set VMS cluster.quorum-count 1
gluster vol set VMS network.ping-timeout 2
gluster vol set VMS cluster.favorite-child-policy mtime
gluster vol heal VMS granular-entry-heal enable
gluster vol set VMS cluster.data-self-heal-algorithm full
gluster vol set VMS features.shard on
gluster vol set VMS performance.write-behind off
gluster vol set VMS performance.flush-behind off
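To confirm the options actually took effect, a quick check like this helps (the grep pattern is just an example):
gluster vol get VMS all | grep -E 'quorum|shard|heal|write-behind|flush-behind'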
This configuration lets me power down the first server and have the VMs
restart on the second server with no issues at all.
I have the very same setup at another customer, but there we are working
with Kingston DC600M SSDs.
It turns out that on the servers with NVMe I get a lot of disk corruption
inside the VMs.
If I reboot, things get worse.
Does anybody know of cases with Gluster and NVMe showing issues like that?
Is there a fix for it?
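If it helps with debugging, these are the kinds of checks I can run on the NVMe nodes and report back (the device name is an example; nvme-cli is installed):
gluster vol heal VMS info              # pending heals per brick
gluster vol heal VMS info split-brain  # files in split-brain, if any
nvme smart-log /dev/nvme0              # media and integrity errors on the drive
dmesg -T | grep -iE 'nvme|xfs'         # kernel-level I/O or filesystem errors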
Thanks
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram