[Gluster-devel] NFS reexport status

Brent A Nelson brent at phys.ufl.edu
Mon Feb 25 07:31:28 UTC 2008

I thought I'd present my findings regarding NFS reexport from a 
recent TLA of GlusterFS (patch 663), in case they are of use:

1) Stale NFS file handles, coupled with the loss of the CWD (current 
working directory), can occur in idle client shells.  This seems to happen 
with AFR filesystems, presumably because the inode number of the directory 
can change.  I have not encountered the problem with unify (even with AFR 
underneath, and even with the namespace AFRed), although I am now using 
the read-subvolume option, which may be what avoids the issue.
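For reference, the read-subvolume arrangement mentioned above can be
expressed in a client volume spec along these lines.  This is only a
sketch: the brick and host names are placeholders, and option spellings
may differ slightly between TLA revisions.

```
# Hypothetical client volume spec fragment; names are placeholders.
volume brick1
  type protocol/client
  option transport-type tcp/client
  option remote-host server1
  option remote-subvolume brick
end-volume

volume brick2
  type protocol/client
  option transport-type tcp/client
  option remote-host server2
  option remote-subvolume brick
end-volume

volume afr0
  type cluster/afr
  subvolumes brick1 brick2
  # Pin reads to one subvolume so metadata reads come from one brick.
  option read-subvolume brick1
end-volume
```

Pinning reads to a single subvolume should keep directory inode numbers
stable from the client's point of view, which may be why the stale-handle
problem stopped appearing for me.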

2) Aside from 1), the NFS kernel server seems to provide 100% correct NFS 
reexport, with no glitches or weird errors.  However, it's extremely slow: 
writes are on the order of 200-300 KB/s.  Does anyone have any ideas on 
how to speed this up?  Nevertheless, for someone who just needs NFS 
reexport for compatibility with systems that can't run GlusterFS, and for 
whom performance doesn't matter, it will apparently do the job.
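For anyone trying to reproduce the kernel-server setup, it looks roughly
like the following.  This is a sketch: the exact direct-io flag name has
varied across early GlusterFS releases, the paths are examples, and the
fsid value is arbitrary (it just has to be unique per export).

```
# Mount the GlusterFS client with direct-io disabled, which the kernel
# NFS server requires; the flag shown is the 1.3-era spelling and may
# differ on other builds.
glusterfs --disable-direct-io-mode -f /etc/glusterfs/client.vol /mnt/glusterfs

# /etc/exports entry: an explicit fsid= is needed when reexporting a
# FUSE filesystem that has no underlying block device.
#   /mnt/glusterfs  *(rw,sync,fsid=14,no_subtree_check)

exportfs -ra
```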

3) Using unfs3 for NFS reexport, there is no need to disable direct-io 
(unlike with the kernel server), and performance is magnificent: I 
saturated the Fast Ethernet link when writing from my test client with 
direct-io enabled (10.2 MB/s writes; 8.3 MB/s with direct-io disabled)! 
Unfortunately, NFS behavior is not 100% correct.  cp -a gives several 
stale NFS file handle errors, and rm -rf doesn't empty some directories 
and therefore fails (at least on a Solaris NFS client).  Because of these 
glitches, unfs3 is probably not really usable right now, despite the 
impressive performance.
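For completeness, the unfs3 setup I'm describing looks roughly like this.
Again a sketch: the exports path and mount points are examples, and the
option letters should be checked against unfsd(8) on your build.

```
# Export the GlusterFS mount point via the userspace unfs3 server.
# -e names the exports file; -d keeps unfsd in the foreground.
echo '/mnt/glusterfs *(rw,no_root_squash)' > /etc/unfs3-exports
unfsd -e /etc/unfs3-exports -d

# On the NFS client (NFSv3 only, since unfs3 speaks v3):
#   mount -o vers=3,tcp server:/mnt/glusterfs /mnt/nfs
```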


Brent Nelson
Director of Computing
Dept. of Physics
University of Florida
