[Gluster-users] trouble combining nufa, distribute and replicate

Jeff Darcy jdarcy at redhat.com
Tue Jul 6 16:30:35 UTC 2010

On 07/06/2010 11:19 AM, Matthias Munnich wrote:
> Hi!
> I am trying to combine nufa, distribute and replicate but am running into
> messages like
> ls: cannot open directory .: Stale NFS file handle
> When I try to list in the mounted directory.  I don't use NFS at all and am
> puzzled as to what is going on.  Attached you find my client config file.  
> The comments marked "ok" are setups which work. However, more than
> one disk is local, which led me to use 3 layers:
> 1: replicate, 2: distribute, 3: nufa
> but somehow this is not working. Does anybody spot what is wrong?
> Any help is appreciated. 

First, you can pretty much ignore the reference to NFS.  It's just a bad
errno-to-string conversion.

Second, it seems like there are several places where we treat ESTALE
specially, but only one in the I/O path where we generate it.  That one
is in dht-common.c, which is shared between distribute and nufa.  The
function is dht_revalidate_cbk, and the ESTALE comes from detecting that
the dht "layout" structure is inconsistent.  This leads me to wonder
whether the problem has to do with the fact that distribute/nufa both
use this code and the same set of extended attributes, and might be
stepping on each other.  In general, having explored in some depth how
these translators work, the idea of stacking nufa/distribute on top of
one another (or themselves) makes me a bit queasy.
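The shape of the check is roughly the following; this is a hypothetical, heavily simplified sketch of the kind of revalidation that can produce ESTALE, not the actual code in dht-common.c. The struct fields and the function name layout_is_stale are illustrative only.

```c
/* Hypothetical sketch: if a cached directory layout no longer matches
 * what the subvolumes report (e.g. because two dht-based translators
 * share the same extended attributes and overwrite each other),
 * revalidation fails with ESTALE. */
#include <errno.h>

struct layout_cache {
    int generation;    /* generation at which the layout was read */
    int spread_count;  /* number of subvolumes the dir is spread over */
};

static int
layout_is_stale(const struct layout_cache *cached,
                int current_generation, int live_subvols)
{
    if (cached->generation != current_generation ||
        cached->spread_count != live_subvols)
        return -ESTALE;   /* surfaces as "Stale NFS file handle" */
    return 0;
}
```

If distribute and nufa are both writing the same layout xattrs on the same directories, each one's cached view can look inconsistent to the other, which would trip exactly this kind of check.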

>From your volfile, it looks like you want to create files on one of two
filesystems replicated between localhost and mentha, and look for files
created elsewhere on dahlia and salvia.  Assuming the four nodes are
similar, you might want to consider using nufa with local-volume-name
set to one of the two replicated subvolumes, and let mentha use the
other replicated subvolume for the other direction.  Also, you should be
able to use the localhost filesystems with just storage/posix instead of
protocol/client (I assume you must have a separate glusterfsd running
for this setup to work), which would eliminate some context switches and
another layer of translator hierarchy.  See
for further examples and explanation, and good luck.
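As a sketch of what I mean, the volfile could look something like the following. Hostnames, brick paths, and volume names here are illustrative, not taken from your config; the key points are the storage/posix volume for the local disk, the local-volume-name option on nufa, and mentha running the mirror-image volfile with its own local replica preferred.

```
# Local brick accessed directly, no protocol/client hop needed
volume local-brick
  type storage/posix
  option directory /export/brick1     # illustrative path
end-volume

# Remote half of the local replica pair
volume mentha-brick
  type protocol/client
  option transport-type tcp
  option remote-host mentha
  option remote-subvolume brick1
end-volume

volume repl-local
  type cluster/replicate
  subvolumes local-brick mentha-brick
end-volume

# The other replicated pair, reached remotely (dahlia/salvia side)
volume repl-remote
  type protocol/client
  option transport-type tcp
  option remote-host dahlia
  option remote-subvolume repl
end-volume

# nufa creates new files on the preferred (local) subvolume and
# still finds files created elsewhere on the remote one
volume nufa0
  type cluster/nufa
  option local-volume-name repl-local
  subvolumes repl-local repl-remote
end-volume
```

This keeps the stack to two cluster translators (replicate under nufa) instead of three, which avoids the distribute-on-nufa layering that I suspect is causing the ESTALE errors.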
