[Gluster-devel] [Gluster-users] Remote operation failed: Stale NFS file handle

Justin Dossey jbd at podomatic.com
Tue Oct 15 21:54:57 UTC 2013


I've seen these errors too on GlusterFS 3.3.1 nodes with glusterfs-fuse
mounts.  It's particularly strange because we're not using NFS to mount the
volumes.
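
For what it's worth, "Stale NFS file handle" on a FUSE mount appears to just be the
standard C library error string for ESTALE, so the message does not by itself mean NFS
is involved. As a minimal sanity check (plain Linux tools, nothing Gluster-specific),
you can confirm which transport a client is actually using:

  # FUSE mounts show up with filesystem type fuse.glusterfs; Gluster NFS mounts as nfs
  grep -E 'fuse\.glusterfs|nfs' /proc/mounts

  # or list only the GlusterFS FUSE mounts
  mount -t fuse.glusterfs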


On Tue, Oct 15, 2013 at 1:44 PM, Neil Van Lysel <van-lyse at cs.wisc.edu> wrote:

> Hello!
>
> Many of our Gluster client nodes are seeing a lot of these errors in their
> log files:
>
> [2013-10-15 06:48:59.467263] W [client-rpc-fops.c:2624:client3_3_lookup_cbk] 0-home-client-6: remote operation failed: Stale NFS file handle. Path: /path (3cfbebf4-40e4-4300-aa6e-bd43b4310b94)
> [2013-10-15 06:48:59.467331] W [client-rpc-fops.c:2624:client3_3_lookup_cbk] 0-home-client-7: remote operation failed: Stale NFS file handle. Path: /path (3cfbebf4-40e4-4300-aa6e-bd43b4310b94)
> [2013-10-15 06:48:59.470554] W [client-rpc-fops.c:2624:client3_3_lookup_cbk] 0-home-client-0: remote operation failed: Stale NFS file handle. Path: /path (d662e7db-7864-4b18-b587-bdc5e8756076)
> [2013-10-15 06:48:59.470624] W [client-rpc-fops.c:2624:client3_3_lookup_cbk] 0-home-client-1: remote operation failed: Stale NFS file handle. Path: /path (d662e7db-7864-4b18-b587-bdc5e8756076)
> [2013-10-15 06:49:04.537548] W [client-rpc-fops.c:2624:client3_3_lookup_cbk] 0-home-client-3: remote operation failed: Stale NFS file handle. Path: /path (a4ea32e0-25f8-440d-b258-23430490624d)
> [2013-10-15 06:49:04.537651] W [client-rpc-fops.c:2624:client3_3_lookup_cbk] 0-home-client-2: remote operation failed: Stale NFS file handle. Path: /path (a4ea32e0-25f8-440d-b258-23430490624d)
> [2013-10-15 06:49:14.380551] W [client-rpc-fops.c:2624:client3_3_lookup_cbk] 0-home-client-0: remote operation failed: Stale NFS file handle. Path: /path (669a2d6b-2998-48b2-8f3f-93d5f65cdd87)
> [2013-10-15 06:49:14.380663] W [client-rpc-fops.c:2624:client3_3_lookup_cbk] 0-home-client-1: remote operation failed: Stale NFS file handle. Path: /path (669a2d6b-2998-48b2-8f3f-93d5f65cdd87)
> [2013-10-15 06:49:14.386390] W [client-rpc-fops.c:2624:client3_3_lookup_cbk] 0-home-client-4: remote operation failed: Stale NFS file handle. Path: /path (016aafa9-35ac-4f6f-90bd-b4ac5d435ad0)
> [2013-10-15 06:49:14.386471] W [client-rpc-fops.c:2624:client3_3_lookup_cbk] 0-home-client-5: remote operation failed: Stale NFS file handle. Path: /path (016aafa9-35ac-4f6f-90bd-b4ac5d435ad0)
> [2013-10-15 18:28:10.630357] W [client-rpc-fops.c:2624:client3_3_lookup_cbk] 0-scratch-client-2: remote operation failed: Stale NFS file handle. Path: /path (5d6153cc-64b3-4151-85cd-2646c33c6918)
> [2013-10-15 18:28:10.630425] W [client-rpc-fops.c:2624:client3_3_lookup_cbk] 0-scratch-client-3: remote operation failed: Stale NFS file handle. Path: /path (5d6153cc-64b3-4151-85cd-2646c33c6918)
> [2013-10-15 18:28:10.636301] W [client-rpc-fops.c:2624:client3_3_lookup_cbk] 0-scratch-client-4: remote operation failed: Stale NFS file handle. Path: /path (2f64b9fe-02a0-408b-9edb-0c5e5bf0ed0e)
> [2013-10-15 18:28:10.636377] W [client-rpc-fops.c:2624:client3_3_lookup_cbk] 0-scratch-client-5: remote operation failed: Stale NFS file handle. Path: /path (2f64b9fe-02a0-408b-9edb-0c5e5bf0ed0e)
> [2013-10-15 18:28:10.638574] W [client-rpc-fops.c:2624:client3_3_lookup_cbk] 0-scratch-client-5: remote operation failed: Stale NFS file handle. Path: /path (990de721-1fc9-461d-8412-8c17c23ebbbd)
> [2013-10-15 18:28:10.638647] W [client-rpc-fops.c:2624:client3_3_lookup_cbk] 0-scratch-client-4: remote operation failed: Stale NFS file handle. Path: /path (990de721-1fc9-461d-8412-8c17c23ebbbd)
> [2013-10-15 18:28:10.645043] W [client-rpc-fops.c:2624:client3_3_lookup_cbk] 0-scratch-client-7: remote operation failed: Stale NFS file handle. Path: /path (0d8d3c5a-d26e-4c15-a8d5-987a4033a6d0)
> [2013-10-15 18:28:10.645157] W [client-rpc-fops.c:2624:client3_3_lookup_cbk] 0-scratch-client-6: remote operation failed: Stale NFS file handle. Path: /path (0d8d3c5a-d26e-4c15-a8d5-987a4033a6d0)
> [2013-10-15 18:28:10.648126] W [client-rpc-fops.c:2624:client3_3_lookup_cbk] 0-scratch-client-6: remote operation failed: Stale NFS file handle. Path: /path (c1c84d57-f54d-4dc1-a5df-9be563da78fb)
> [2013-10-15 18:28:10.648276] W [client-rpc-fops.c:2624:client3_3_lookup_cbk] 0-scratch-client-7: remote operation failed: Stale NFS file handle. Path: /path (c1c84d57-f54d-4dc1-a5df-9be563da78fb)
>
>
> How can I resolve these errors?
>
>
> *gluster --version:
> glusterfs 3.4.0 built on Jul 25 2013 04:12:27
>
>
> *gluster volume info:
> Volume Name: scratch
> Type: Distributed-Replicate
> Volume ID: 198b9d77-96e6-4c7f-9f0c-3618cbcaa940
> Status: Started
> Number of Bricks: 4 x 2 = 8
> Transport-type: tcp
> Bricks:
> Brick1: 10.129.40.21:/data/glusterfs/brick1/scratch
> Brick2: 10.129.40.22:/data/glusterfs/brick1/scratch
> Brick3: 10.129.40.23:/data/glusterfs/brick1/scratch
> Brick4: 10.129.40.24:/data/glusterfs/brick1/scratch
> Brick5: 10.129.40.21:/data/glusterfs/brick2/scratch
> Brick6: 10.129.40.22:/data/glusterfs/brick2/scratch
> Brick7: 10.129.40.23:/data/glusterfs/brick2/scratch
> Brick8: 10.129.40.24:/data/glusterfs/brick2/scratch
> Options Reconfigured:
> features.quota: off
>
> Volume Name: home
> Type: Distributed-Replicate
> Volume ID: 0d8ebafc-471e-4b16-a4a9-787ce8616225
> Status: Started
> Number of Bricks: 4 x 2 = 8
> Transport-type: tcp
> Bricks:
> Brick1: 10.129.40.21:/data/glusterfs/brick1/home
> Brick2: 10.129.40.22:/data/glusterfs/brick1/home
> Brick3: 10.129.40.23:/data/glusterfs/brick1/home
> Brick4: 10.129.40.24:/data/glusterfs/brick1/home
> Brick5: 10.129.40.21:/data/glusterfs/brick2/home
> Brick6: 10.129.40.22:/data/glusterfs/brick2/home
> Brick7: 10.129.40.23:/data/glusterfs/brick2/home
> Brick8: 10.129.40.24:/data/glusterfs/brick2/home
> Options Reconfigured:
> features.quota: off
>
>
> *gluster volume status:
> Status of volume: scratch
> Gluster process                                       Port   Online  Pid
> --------------------------------------------------------------------------
> Brick 10.129.40.21:/data/glusterfs/brick1/scratch     49154  Y       7536
> Brick 10.129.40.22:/data/glusterfs/brick1/scratch     49154  Y       27976
> Brick 10.129.40.23:/data/glusterfs/brick1/scratch     49154  Y       7436
> Brick 10.129.40.24:/data/glusterfs/brick1/scratch     49154  Y       19773
> Brick 10.129.40.21:/data/glusterfs/brick2/scratch     49155  Y       7543
> Brick 10.129.40.22:/data/glusterfs/brick2/scratch     49155  Y       27982
> Brick 10.129.40.23:/data/glusterfs/brick2/scratch     49155  Y       7442
> Brick 10.129.40.24:/data/glusterfs/brick2/scratch     49155  Y       19778
> NFS Server on localhost                               2049   Y       7564
> Self-heal Daemon on localhost                         N/A    Y       7569
> NFS Server on 10.129.40.24                            2049   Y       19788
> Self-heal Daemon on 10.129.40.24                      N/A    Y       19792
> NFS Server on 10.129.40.23                            2049   Y       7464
> Self-heal Daemon on 10.129.40.23                      N/A    Y       7468
> NFS Server on 10.129.40.22                            2049   Y       28004
> Self-heal Daemon on 10.129.40.22                      N/A    Y       28008
>
> There are no active volume tasks
>
> Status of volume: home
> Gluster process                                       Port   Online  Pid
> --------------------------------------------------------------------------
> Brick 10.129.40.21:/data/glusterfs/brick1/home        49152  Y       7549
> Brick 10.129.40.22:/data/glusterfs/brick1/home        49152  Y       27989
> Brick 10.129.40.23:/data/glusterfs/brick1/home        49152  Y       7449
> Brick 10.129.40.24:/data/glusterfs/brick1/home        49152  Y       19760
> Brick 10.129.40.21:/data/glusterfs/brick2/home        49153  Y       7554
> Brick 10.129.40.22:/data/glusterfs/brick2/home        49153  Y       27994
> Brick 10.129.40.23:/data/glusterfs/brick2/home        49153  Y       7454
> Brick 10.129.40.24:/data/glusterfs/brick2/home        49153  Y       19766
> NFS Server on localhost                               2049   Y       7564
> Self-heal Daemon on localhost                         N/A    Y       7569
> NFS Server on 10.129.40.24                            2049   Y       19788
> Self-heal Daemon on 10.129.40.24                      N/A    Y       19792
> NFS Server on 10.129.40.22                            2049   Y       28004
> Self-heal Daemon on 10.129.40.22                      N/A    Y       28008
> NFS Server on 10.129.40.23                            2049   Y       7464
> Self-heal Daemon on 10.129.40.23                      N/A    Y       7468
>
> There are no active volume tasks
>
>
> *The gluster volumes are mounted using the glusterfs-fuse package
> (glusterfs-fuse-3.4.0-3.el6.x86_64) on the clients like so:
> /sbin/mount.glusterfs 10.129.40.21:home /home
> /sbin/mount.glusterfs 10.129.40.21:scratch /scratch
>
>
> *Gluster packages on Gluster servers:
> glusterfs-server-3.4.0-3.el6.x86_64
> glusterfs-libs-3.4.0-8.el6.x86_64
> glusterfs-3.4.0-3.el6.x86_64
> glusterfs-geo-replication-3.4.0-3.el6.x86_64
> glusterfs-fuse-3.4.0-3.el6.x86_64
> glusterfs-rdma-3.4.0-3.el6.x86_64
>
>
> *Gluster packages on clients:
> glusterfs-fuse-3.4.0-3.el6.x86_64
> glusterfs-3.4.0-3.el6.x86_64
>
>
> All clients and servers are running the same OS and kernel:
>
> *uname -a:
> Linux <hostname> 2.6.32-358.6.1.el6.x86_64 #1 SMP Tue Apr 23 16:15:13 CDT
> 2013 x86_64 x86_64 x86_64 GNU/Linux
>
> *cat /etc/redhat-release :
> Scientific Linux release 6.3 (Carbon)
>
>
> Thanks for your help,
>
> Neil Van Lysel
> UNIX Systems Administrator
> Center for High Throughput Computing
> University of Wisconsin - Madison
>
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>
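
For reference, a rough sketch of how one might chase these down, not a definitive fix.
The brick path below is taken from the volume info above and the GFID from the first log
entry; <path-to-file> is a placeholder, and this assumes the bricks still hold the
entries in question:

  # On a brick server, each file has a backend entry under .glusterfs named after its GFID:
  #   <brick>/.glusterfs/<first two hex chars>/<next two hex chars>/<full gfid>
  ls -l /data/glusterfs/brick1/home/.glusterfs/3c/fb/3cfbebf4-40e4-4300-aa6e-bd43b4310b94

  # For regular files that entry is a hard link, so the real path can be located with:
  find /data/glusterfs/brick1/home -samefile \
      /data/glusterfs/brick1/home/.glusterfs/3c/fb/3cfbebf4-40e4-4300-aa6e-bd43b4310b94

  # Compare the trusted.gfid xattr of the same file on both bricks of a replica pair;
  # if the GFID a client has cached no longer matches what is on the brick (for example
  # after a delete and recreate from another client), lookups can fail with ESTALE:
  getfattr -d -m . -e hex /data/glusterfs/brick1/home/<path-to-file>

  # Check whether the self-heal daemon has pending or failed heals:
  gluster volume heal home info
  gluster volume heal home info split-brain

If the warnings only appear for paths that were recently deleted or renamed by another
client, they may be a transient lookup race rather than a sign of real damage.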



-- 
Justin Dossey
CTO, PodOmatic


More information about the Gluster-devel mailing list