[Gluster-devel] Re: NFS reexport status

Brent A Nelson brent at phys.ufl.edu
Wed Feb 27 00:13:42 UTC 2008


Replying to myself...

On Mon, 25 Feb 2008, Brent A Nelson wrote:

> I thought I'd present my findings regarding NFS reexport from a recent 
> TLA checkout of GlusterFS (patch 663), in case they are of use:
>
> 1) Stale NFS file handles, coupled with loss of the CWD (current working 
> directory), can occur in idle client shells.  This seems to occur with AFR 
> filesystems, presumably because the inode number of the directory can 
> change.  I've not encountered the problem with unify (even with AFR 
> underneath and even with the namespace AFRed), although I am now using 
> the read-subvolume option, which may be avoiding the issue.
>

I was wrong; this does still occur, even with unified AFRs, at least when 
the namespace is also an AFR (I suspect it would work fine if the namespace 
were not an AFR, but I could be wrong).  If anyone has any idea how to 
correct this (without losing AFR redundancy), please let me know.
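For reference, a rough sketch of the sort of unify-over-AFR layout in 
question (volume names are made up, the underlying client/brick volumes 
are omitted, and the exact option syntax may differ between releases):

  volume afr1
    type cluster/afr
    subvolumes brick1a brick1b
    # pin reads to one replica, per the read-subvolume option
    # mentioned above
    option read-subvolume brick1a
  end-volume

  # the namespace is itself an AFR, which is the case where the
  # stale-handle problem shows up
  volume ns-afr
    type cluster/afr
    subvolumes ns1 ns2
  end-volume

  volume unify0
    type cluster/unify
    option namespace ns-afr
    option scheduler rr
    subvolumes afr1 afr2
  end-volume

(afr2 would be defined like afr1.)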

> 2) Aside from 1), the NFS kernel server seems to provide 100% correct NFS 
> reexport, with no glitches or weird errors.  However, it's extremely slow, 
> on the order of 200-300 KBps for writes.  Does anyone have any ideas on how 
> to speed this up?  Nevertheless, for someone who just needs NFS reexport 
> for compatibility with systems that can't run GlusterFS, and for whom 
> performance doesn't matter, this will apparently do the job.

I found that the Solaris NFS mount option "forcedirectio" gives a massive 
speed boost.  It's still not as fast as Unfs3 (4 MBps writes compared to 
10.2 MBps on Fast Ethernet), but it's not all that bad, either (and Unfs3 
"glitches" with GlusterFS)...

Thanks,

Brent