[Gluster-users] NFS problem

Shehjar Tikoo shehjart at gluster.com
Sat Feb 6 09:40:23 UTC 2010

Jonas Bulthuis wrote:
> Hi Shehjar,
> Thanks for your reply. We may be interested in testing an alpha version
> in the future. I cannot tell for sure right now, but if you can send me
> an e-mail at the time this version becomes available, we can see if we
> can fit it in.
> We're currently running the Gluster FS on Ubuntu (LTS) servers. I can
> access the volumes through the Gluster client on the same machines. Do
> you know whether it's possible to export the Gluster client mount point
> through nfs-kernel-server instead of the user-space NFS server, or would
> that be unwise?

It is possible, but it is not a real solution. Because of the way knfsd
talks to FUSE, some amount of state has to be kept in the kernel
indefinitely, which leads to excessive memory usage. unfsd does not
have this problem.
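For anyone who still wants to try the knfsd route despite the caveat
above: a FUSE mount can only be exported by the kernel NFS server if the
export carries an explicit fsid, because FUSE filesystems have no stable
device number for knfsd to derive file handles from. A minimal
/etc/exports sketch follows; the mount point, client network, and fsid
value are placeholders, not taken from this thread:

```shell
# /etc/exports -- hypothetical entry. /mnt/gluster and 192.168.1.0/24
# stand in for your actual Gluster client mount point and NFS client
# network; pick any fsid that is unique among your exports.
# fsid= is mandatory here because FUSE filesystems lack a persistent
# device id, and knfsd cannot build file handles without one.
/mnt/gluster 192.168.1.0/24(rw,sync,no_subtree_check,fsid=14)
```

After editing the file, `exportfs -ra` re-reads the export table without
restarting the NFS server.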


> Kind regards / Met vriendelijke groet,
> Jonas Bulthuis
> Shehjar Tikoo wrote:
>> Hi
>> Due to time constraints, booster has gone untested for the last couple
>> of months. I suggest using unfsd over fuse for the time
>> being. We'll be releasing an alpha of the NFS translator
>> somewhere in March. Let me know if you'd be interested in doing
>> early testing.
>> Thanks
>> -Shehjar
>> Jonas Bulthuis wrote:
>>> Hello,
>>> I'm using Gluster with cluster/replicate on two servers. On each of
>>> these servers I'm exporting the replicated volume through the UNFSv3
>>> booster provided by Gluster.
>>> Multiple nfs clients are using these storage servers and most of the
>>> time it seems to work fine. However, sometimes the clients give error
>>> messages about a 'Stale NFS Handle' when trying to get a directory
>>> listing of some directory on the volume (not all directories gave this
>>> problem). Yesterday it happened after reinstalling the client machines.
>>> All the client machines had the same problem. Rebooting the client
>>> machines did not help. Eventually, restarting the UNFSv3 server solved
>>> the problem.
>>> The problem has disappeared for now, but since it has happened twice
>>> in a short time, it seems likely to occur again.
>>> Does anyone have any suggestion on how to permanently solve this problem?
>>> This is the nfs booster configuration we're currently using:
>>> /etc/glusterfs/cache_acceptation-tcp.vol /nfsexport_acceptation glusterfs
>>>     subvolume=cache_acceptation,logfile=/usr/local/var/log/glusterfs/booster_acceptation.log,loglevel=DEBUG,attr_timeout=0
>>> Any help will be very much appreciated. Thanks in advance.

More information about the Gluster-users mailing list