[Gluster-users] NFS mounts with glusterd on localhost - reliable or not?
mangoo at wpkg.org
Fri Jul 13 08:21:15 UTC 2012
On 07/13/2012 02:59 PM, James Kahn wrote:
> Try 3.3.0 - 3.2.6 has issues with NFS in general (memory leaks, etc).
Upgrading to 3.3.0 would be quite a big adventure for me (production
site, lots of traffic, etc.). But I guess it would be justified if it
really fixes this bug.
The issue was reported earlier, but I don't see any reference to it
being fixed in 3.3.0:
A deadlock happens when writing a file big enough to fill the
filesystem cache: the kernel tries to flush the cache to free some
memory for glusterfsd, which needs memory to commit some
filesystem blocks to free some memory for glusterfsd...
> -----Original Message-----
> From: Tomasz Chmielewski <mangoo at wpkg.org>
> Date: Thursday, 12 July 2012 5:56 PM
> To: Gluster General Discussion List <gluster-users at gluster.org>
> Subject: [Gluster-users] NFS mounts with glusterd on localhost - reliable
> or not?
>> are NFS mounts made on a single server (i.e. where glusterd is running)
>> supposed to be stable (with gluster 3.2.6)?
>> I'm using the following line in /etc/fstab:
>> localhost:/sites /var/ftp/sites nfs _netdev,mountproto=tcp,nfsvers=3,bg 0
>> The problem is, after some time (~1-6 hours), I'm no longer able to
>> access this mount.
>> dmesg says:
>> [49609.832274] nfs: server localhost not responding, still trying
>> [49910.639351] nfs: server localhost not responding, still trying
>> [50211.446433] nfs: server localhost not responding, still trying
>> What's worse, whenever this happens, *all* other servers in the cluster
>> (it's a 10-server distributed volume) will destabilise - their load
>> average will grow, and eventually their gluster mount becomes
>> unresponsive, too (other servers use normal gluster mounts).
>> At this point, I have to kill all gluster processes, start glusterd
>> again, and remount (on the servers using the gluster mount).
>> Is it expected behaviour with gluster and NFS mounts on localhost? Can
>> it be caused by some kind of deadlock? Any workarounds?
>> Tomasz Chmielewski
>> Gluster-users mailing list
>> Gluster-users at gluster.org