[Gluster-users] setvolume failed (Stale NFS file handle) when volfile is changed
mki-glusterfs at mozone.net
Sat Nov 13 02:06:55 UTC 2010
Hi
When the client volume file as supplied by one of the servers in a
distribute/replicate setup changes, my clients can't remount the
filesystem correctly. Turning on debug mode shows these messages:
[2010-11-13 01:46:45] D [client-protocol.c:6178:client_setvolume_cbk]
10.12.47.106-3: setvolume failed (Stale NFS file handle)
The config was generated using glusterfs-volgen. All I was trying
to accomplish was to comment out the statprefetch volume definition
and remount the fs, but remounting results in only the first
primary/backup server in the replicate group getting mounted. Heck,
even if I just change transport.remote-port to read report-port
and update the config, the clients can't mount the filesystem anymore.
The moment I revert the config, they are fine...
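Concretely, the edit was just commenting out the top-most statprefetch block in the volgen-generated client volfile, so the translator below it becomes the new top volume (the volume names below are volgen defaults, so treat this as illustrative):

```
# topmost translator in the volgen-generated client volfile;
# commenting out the whole block leaves iocache as the new top volume
#volume statprefetch
#  type performance/stat-prefetch
#  subvolumes iocache
#end-volume
```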
This is with 3.0.4, although I've seen this happen with 3.0.5
as well. Yes I know 3.1 is out, but I'm not comfortable moving
to it just yet, so it's not an option...
If I copy that exact volfile to the client and then use that to
mount the filesystem, it has no problems...
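In other words, fetching the volfile from the server fails, while pointing the client at a local copy of the same file works. Roughly the two invocations (the local path is just a placeholder):

```shell
# fetch the volfile from the volfile server -- fails after the change:
glusterfs --volfile-server=10.12.47.106 /mnt/glusterfs

# use a local copy of the exact same volfile -- works:
glusterfs --volfile=/etc/glusterfs/client.vol /mnt/glusterfs
```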
Any ideas as to what is going on here? Why would changing the
client volume file on the volfile server break the mount?
Thanks.
Mohan