[Gluster-users] setvolume failed (Stale NFS file handle) when volfile is changed

Craig Carl craig at gluster.com
Mon Nov 15 09:57:18 UTC 2010

Mohan - 
With versions of Gluster before 3.1, any change to the Gluster configuration, including adding servers (bricks), requires that the Gluster services on all servers and clients be stopped simultaneously, the new vol files installed, and then Gluster restarted. 
Version 3.1 introduced dynamic volumes, eliminating that requirement. 
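For example, with 3.1's dynamic volumes a brick can be added to a running volume and the data rebalanced without stopping anything; connected clients pick up the new volfile automatically. A minimal sketch (the volume name "myvol" and the brick path are hypothetical):

```shell
# Gluster 3.1+: add a brick to a live volume; no unmount or
# service restart is required on servers or clients.
gluster volume add-brick myvol server3:/export/brick1

# Optionally spread existing data across the new brick.
gluster volume rebalance myvol start
```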


Craig Carl 

Gluster, Inc. 
Cell - (408) 829-9953 (California, USA) 
Gtalk - craig.carl at gmail.com 

From: mki-glusterfs at mozone.net 
To: gluster-users at gluster.org 
Sent: Monday, November 15, 2010 1:45:19 AM 
Subject: Re: [Gluster-users] setvolume failed (Stale NFS file handle) when volfile is changed 

On Mon, Nov 15, 2010 at 03:20:43AM -0600, Craig Carl wrote: 
> > When the client volume file as supplied by one of the servers in a 
> > distribute/replicate setup changes, my clients can't remount the 
> > filesystem correctly. Turning on debug mode shows these messages: 
> > 
> > [2010-11-13 01:46:45] D [client-protocol.c:6178:client_setvolume_cbk] 
> > setvolume failed (Stale NFS file handle) 
> All the client and server volume files must be in sync. Having different 
> client vol files on different clients will result in these types of errors; 
> it is also the primary cause of split-brain, so please be cautious when 
> making these kinds of changes. 

Thanks Craig! On a related note, if that's the case, wouldn't that mean 
that adding new bricks requires unmounting the filesystem on all the client 
nodes before you can even attempt to remount it on them? Or is the typical 
approach to adding new bricks to copy the updated volume file to the client 
nodes and mount the filesystem that way, until all your client nodes have 
successfully unmounted the old config? 

