[Gluster-users] NFS mounts with glusterd on localhost - reliable or not?
Krishna Srinivas
ksriniva at redhat.com
Thu Jul 19 09:16:24 UTC 2012
It was pretty confusing to read this thread. Hope I can clarify the
questions here.
The original question by Tomasz was whether the behavior described in
https://bugzilla.redhat.com/show_bug.cgi?id=GLUSTER-2320 is still seen
in 3.3.0. Yes, it is: the deadlock cannot be avoided and still occurs
when the machine is running low on memory. A write call by the
gluster-nfs process triggers an nfs-client cache flush in the kernel,
which in turn tries to write the cached data back to the already
blocked glusterfs-nfs process. Hence, avoid this kind of setup.
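To make the setup concrete, here is a sketch (the hostname "server1" and volume name "myvol" are illustrative, not from the original report) of the mount to avoid, and the native FUSE mount that is commonly used instead on the same host:

```shell
# On the host that runs the glusterfs-nfs process ("server1"), avoid
# NFS-mounting its own export -- this is the setup that can deadlock
# under memory pressure:
#   mount -t nfs server1:/myvol /mnt/nfs     # risky on server1 itself

# A local mount via the native FUSE client does not go through the
# kernel NFS client's page cache, so it avoids this particular
# deadlock:
mount -t glusterfs server1:/myvol /mnt/gluster
```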
The other discussion in this thread was about NLM, which has been
implemented in 3.3.0. It supports locking calls from NFS clients, i.e.
fcntl() locking for applications running on the NFS client. An NLM
server is implemented both in glusterfs and in the kernel. The kernel
NLM server is used by the kernel nfsd as well as the kernel NFS
client, so whenever you have an NFS mount point, the kernel NFS client
automatically starts the kernel NLM server. Therefore, if the
glusterfs-nfs process is already running on a system (and hence also
running its own NLM server), then "mount -t nfs someserver:/export
/mnt/nfs" on the same system fails: the glusterfs NLM server has
already registered the NLM service with the portmapper, so the kernel
NLM server's registration with the portmapper fails and the kernel NFS
client cannot start it. The workaround is "mount -t nfs -o nolock
someserver:/export /mnt/nfs" if you really want an NFS mount on the
same machine where the glusterfs-nfs process is running.
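As a quick way to see the conflict, you can check which NLM server currently holds the portmapper registration before applying the workaround ("localhost" and "someserver:/export" are illustrative):

```shell
# Program 100021 (nlockmgr) is the NLM service; this lists the
# versions and ports currently registered with the portmapper,
# i.e. whichever NLM server (glusterfs or kernel) registered first:
rpcinfo -p localhost | grep nlockmgr

# Workaround: disable client-side NLM so the mount does not need to
# start the kernel NLM server. Note that fcntl() locks on this mount
# are then only local and not coordinated across clients:
mount -t nfs -o nolock someserver:/export /mnt/nfs
```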