[Gluster-devel] [Gluster-users] lockd: server not responding, timed out

Peter Auyeung pauyeung at connexity.com
Mon Jan 26 00:26:53 UTC 2015


Hi Niels,

The question is why we keep getting the lockd error even after restarting and rebooting the NFS clients..

Peter
________________________________________
From: Niels de Vos [ndevos at redhat.com]
Sent: Saturday, January 24, 2015 3:26 AM
To: Peter Auyeung
Cc: gluster-users at gluster.org; gluster-devel at gluster.org
Subject: Re: [Gluster-devel] [Gluster-users] lockd: server not responding, timed out

On Fri, Jan 23, 2015 at 11:50:26PM +0000, Peter Auyeung wrote:
> We have a 6-node Gluster cluster running Ubuntu on XFS, sharing Gluster
> volumes over NFS, that has been running fine for 3 months.
> We restarted glusterfs-server on one of the nodes and all NFS clients
> started getting "lockd: server not responding, timed out" in
> /var/log/messages.
>
> We are still able to read and write, but processes that require a
> persistent file lock, such as database exports, failed.
>
> We have an interim fix to remount the NFS shares with the nolock option,
> but we need to know why that is necessary all of a sudden after a
> service glusterfs-server restart on one of the gluster nodes.
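
For reference, that interim remount would look something like this (the
server and volume names here are hypothetical):

    # unmount, then remount with client-side (local) locking only
    umount /mnt/glustervol
    mount -t nfs -o vers=3,nolock gluster-node1:/glustervol /mnt/glustervol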

The reason you need to mount with 'nolock' is that one server can only
have one NLM service active. The Linux NFS client uses the 'lockd'
kernel module, and the Gluster/NFS server provides its own lock manager.
To be usable, a lock manager needs to be registered at
rpcbind/portmapper. Only one lock manager can be registered at a time;
the second one that tries to register will fail. If the NFS client has
registered the lockd kernel module as the lock manager, any locking
requests to the Gluster/NFS service will fail and you will see those
messages in /var/log/messages.
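
You can check which lock manager is currently registered by querying
rpcbind; NLM shows up as the 'nlockmgr' program (the host name below is
a placeholder):

    # list the RPC services registered on the server and filter for NLM
    rpcinfo -p gluster-node1 | grep nlockmgr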

This is one of the main reasons why it is not advised to access volumes
over NFS on a Gluster storage server itself. You should rather use the
GlusterFS protocol for mounting volumes locally. (Or, even better,
separate your storage servers from your application servers.)
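
A native GlusterFS (FUSE) mount does not involve NLM at all, so it
avoids this conflict; a minimal example, again with hypothetical names:

    # mount the volume with the native GlusterFS client instead of NFS
    mount -t glusterfs gluster-node1:/glustervol /mnt/glustervol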

HTH,
Niels

