[Gluster-devel] io-threads problem? (was: opendir gets Stale NFS file handle)

Niels de Vos ndevos at redhat.com
Tue Sep 30 08:38:06 UTC 2014


On Tue, Sep 30, 2014 at 06:03:44AM +0200, Emmanuel Dreyfus wrote:
> Hello
> 
> I observe this kind of errors in bricks logs:
> [2014-09-30 03:56:10.172889] E [server-rpc-fops.c:681:server_opendir_cbk] 0-patchy-server: 11: OPENDIR (null) (63a151ad-a8b7-496b-92a8-5c3c7897e6fa) ==> (Stale NFS file handle)

ESTALE gets returned when a directory is opened by handle (in this case
the GFID). The posix xlator should do the OPENDIR on the brick, through
the .glusterfs/...GFID... structure.
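For illustration, a GFID-based resolve goes through the handle tree
under <brick>/.glusterfs/, where an object with GFID aabbccdd-... lives
at .glusterfs/aa/bb/aabbccdd-... (for a directory that entry is a
symlink into its parent). A minimal sketch of such an open -- not the
actual posix xlator code, opendir_by_gfid is a made-up helper:

    #include <stdio.h>
    #include <errno.h>
    #include <dirent.h>

    static DIR *
    opendir_by_gfid (const char *brick, const char *gfid)
    {
            char path[4096];

            /* .glusterfs/<first 2 hex chars>/<next 2>/<full gfid> */
            snprintf (path, sizeof (path), "%s/.glusterfs/%.2s/%.2s/%s",
                      brick, gfid, gfid + 2, gfid);

            DIR *dir = opendir (path);
            /* if the handle entry is missing (never linked, or already
             * removed), the open fails and the server reports the
             * failure as ESTALE, as in the log message above */
            if (dir == NULL && errno == ENOENT)
                    errno = ESTALE;
            return dir;
    }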

> Here is the backtrace leading to it. Is that a real error?
> 
> #3  0xb9c45934 in server_opendir_cbk (frame=0xbb235e70, cookie=0x0,
>     this=0xbb2ca018, op_ret=-1, op_errno=70, fd=0x0, xdata=0x0)
>     at server-rpc-fops.c:682
> #4  0xb9c4d402 in server_opendir_resume (frame=0xbb235e70, bound_xl=0xbb2c8018)
>     at server-rpc-fops.c:2507
> #5  0xb9c3ef51 in server_resolve_done (frame=0xbb235e70) at server-resolve.c:557
> #6  0xb9c3f02d in server_resolve_all (frame=0xbb235e70) at server-resolve.c:592
> #7  0xb9c3eefa in server_resolve (frame=0xbb235e70) at server-resolve.c:541
> #8  0xb9c3f00a in server_resolve_all (frame=0xbb235e70) at server-resolve.c:588
> #9  0xb9c3e662 in resolve_continue (frame=0xbb235e70) at server-resolve.c:233
> #10 0xb9c3e242 in resolve_gfid_cbk (frame=0xbb235e70, cookie=0xbb287528,
>     this=0xbb2ca018, op_ret=-1, op_errno=2, inode=0xbb287498, buf=0xb9b2cd14,
>     xdata=0x0, postparent=0xb9b2ccac) at server-resolve.c:171
> #11 0xb9c710a7 in io_stats_lookup_cbk (frame=0xbb287528, cookie=0xbb2875b8,
>     this=0xbb2c8018, op_ret=-1, op_errno=2, inode=0xbb287498, buf=0xb9b2cd14,
>     xdata=0x0, postparent=0xb9b2ccac) at io-stats.c:1510
> #12 0xb9cbc09f in marker_lookup_cbk (frame=0xbb2875b8, cookie=0xbb287648,
>     this=0xbb2c5018, op_ret=-1, op_errno=2, inode=0xbb287498, buf=0xb9b2cd14,
>     dict=0x0, postparent=0xb9b2ccac) at marker.c:2614
> #13 0xbb7667d8 in default_lookup_cbk (frame=0xbb287648, cookie=0xbb2876d8,
>     this=0xbb2c4018, op_ret=-1, op_errno=2, inode=0xbb287498, buf=0xb9b2cd14,
>     xdata=0x0, postparent=0xb9b2ccac) at defaults.c:841
> #14 0xbb7667d8 in default_lookup_cbk (frame=0xbb2876d8, cookie=0xbb287768,
>     this=0xbb2c2018, op_ret=-1, op_errno=2, inode=0xbb287498, buf=0xb9b2cd14,
>     xdata=0x0, postparent=0xb9b2ccac) at defaults.c:841
> #15 0xb9cf12ab in pl_lookup_cbk (frame=0xbb287768, cookie=0xbb287888,
>     this=0xbb2c1018, op_ret=-1, op_errno=2, inode=0xbb287498, buf=0xb9b2cd14,
>     xdata=0x0, postparent=0xb9b2ccac) at posix.c:2036
> #16 0xb9d03fb0 in posix_acl_lookup_cbk (frame=0xbb287888, cookie=0xbb287918,
>     this=0xbb2c0018, op_ret=-1, op_errno=2, inode=0xbb287498, buf=0xb9b2cd14,
>     xattr=0x0, postparent=0xb9b2ccac) at posix-acl.c:806
> #17 0xb9d30601 in posix_lookup (frame=0xbb287918, this=0xbb2be018,
>     loc=0xb9910048, xdata=0xbb2432a8) at posix.c:189
> #18 0xbb771646 in default_lookup (frame=0xbb287918, this=0xbb2bf018,
>     loc=0xb9910048, xdata=0xbb2432a8) at defaults.c:2117
> #19 0xb9d04384 in posix_acl_lookup (frame=0xbb287888, this=0xbb2c0018,
>     loc=0xb9910048, xattr=0x0) at posix-acl.c:858
> #20 0xb9cf1713 in pl_lookup (frame=0xbb287768, this=0xbb2c1018,
>     loc=0xb9910048, xdata=0x0) at posix.c:2080
> #21 0xbb76f4da in default_lookup_resume (frame=0xbb2876d8, this=0xbb2c2018,
>     loc=0xb9910048, xdata=0x0) at defaults.c:1683
> #22 0xbb786667 in call_resume_wind (stub=0xb9910028) at call-stub.c:2478
> #23 0xbb78d4f5 in call_resume (stub=0xb9910028) at call-stub.c:2841
> #24 0xbb30402f in iot_worker (
>     data=<error reading variable: Cannot access memory at address 0xb9b2cfd8>,
>     data@entry=<error reading variable: Cannot access memory at address 0xb9b2cfd4>)
>     at io-threads.c:214

This error suggests that 'data' cannot be accessed. I have no idea why
io-threads would fail here, though...
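
For context, iot_worker() itself is only the thread-pool loop that
drains the io-threads request queue and resumes the queued call stubs
(frames #22-#24 above). A simplified sketch of that pattern -- not the
real io-threads.c, the structs here are made up:

    #include <pthread.h>

    struct stub {
            struct stub *next;
            void       (*resume) (struct stub *stub); /* cf. call_resume() */
    };

    struct iot_conf {
            pthread_mutex_t lock;
            pthread_cond_t  cond;
            struct stub    *queue;   /* requests queued by the wind path */
    };

    static void *
    iot_worker (void *data)
    {
            struct iot_conf *conf = data;
            struct stub     *stub;

            for (;;) {
                    pthread_mutex_lock (&conf->lock);
                    while (conf->queue == NULL)
                            pthread_cond_wait (&conf->cond, &conf->lock);
                    stub = conf->queue;
                    conf->queue = stub->next;
                    pthread_mutex_unlock (&conf->lock);

                    /* resume winds the stored FOP (here: the LOOKUP)
                     * further down the xlator stack */
                    stub->resume (stub);
            }
            return NULL;
    }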

Niels

