[Gluster-users] How to maintain HA using NFS clients if the NFS daemon process gets killed on a gluster node?
Kris Laib
Kris.Laib at nwea.org
Wed Jan 27 16:09:46 UTC 2016
Hi all,
We're getting ready to roll out Gluster using standard NFS from the clients, with CTDB and RRDNS to facilitate HA. I thought we were good to go, but we recently hit an issue where one of the gluster nodes in a test cluster ran out of memory and the OOM killer took out the NFS daemon process. Since there was still IP traffic between nodes and the gluster-backed local CTDB mount holding the lock file was intact, CTDB didn't kick in and initiate failover, and all clients connected to the node where NFS was killed lost their connections. We'll obviously fix the lack of memory, but going forward, how can we protect against clients getting disconnected if the NFS daemon is stopped for any reason?
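For reference, our CTDB config is roughly along these lines (paths, file names, and the mount point are illustrative, not exact). I'm wondering whether something like CTDB_MANAGES_NFS=yes is the missing piece, so that CTDB watches the NFS service itself rather than only node connectivity and the recovery lock:

    # /etc/sysconfig/ctdb (legacy-style options, values illustrative)
    CTDB_RECOVERY_LOCK=/mnt/ctdb-lock/lockfile        # lock file on the gluster-backed mount
    CTDB_NODES=/etc/ctdb/nodes                        # internal IPs of the cluster nodes
    CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses  # floating IPs handed out via RRDNS
    # Candidate addition: have CTDB monitor kernel NFS and mark the node
    # unhealthy (triggering public-IP failover) if the NFS service dies
    CTDB_MANAGES_NFS=yes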
Our cluster has three nodes: one is a silent witness node to help avoid split brain, and the other two host the volumes, with one brick per node and 1x2 replication.
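For completeness, the volume was created along these lines (volume name, host names, and brick paths are illustrative), with the witness node peered only so it can count toward server quorum:

    # bricks only on node1 and node2; node3 is just a peer for quorum
    gluster volume create gvol0 replica 2 node1:/bricks/gvol0 node2:/bricks/gvol0
    gluster volume set gvol0 cluster.server-quorum-type server
    gluster volume set all cluster.server-quorum-ratio 51%
    gluster volume start gvol0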
Is there something incorrect about my setup, or is this a known drawback of using standard NFS mounts with Gluster?
Thanks,
Kris