[Gluster-devel] Glusterfs distributed volume stalls for a long time with bailing out frame type(GlusterFS 3.2.7) op(INODELK(29))

Pranith Kumar K pkarampu at redhat.com
Wed Jan 23 05:21:27 UTC 2013


On 01/23/2013 08:33 AM, Song wrote:
>
> Hi,
>
> When an application accesses a directory on the glusterfs volume, the 
> glusterfs client hangs for about 1 hour, from 19:44:05 to 20:44:25.
>
> The gluster volume is DHT + AFR, natively mounted at /xmail/gfs1.
>
> The directory being accessed is /xmail/gfs1/xmail_dedup/gfs1_000/011/204/.
>
> The following message is displayed in client log:
>
> [2013-01-22 19:14:04.849597] E 
> [client3_1-fops.c:1226:client3_1_inodelk_cbk] 0-gfs1-client-14: remote 
> operation failed: No such file or directory
>
> [2013-01-22 19:14:04.849636] E 
> [client3_1-fops.c:1226:client3_1_inodelk_cbk] 0-gfs1-client-11: remote 
> operation failed: No such file or directory
>
> [2013-01-22 19:14:04.849674] E 
> [client3_1-fops.c:1226:client3_1_inodelk_cbk] 0-gfs1-client-17: remote 
> operation failed: No such file or directory
>
> [2013-01-22 19:14:04.849756] E 
> [client3_1-fops.c:1226:client3_1_inodelk_cbk] 0-gfs1-client-23: remote 
> operation failed: No such file or directory
>
> [2013-01-22 19:14:04.849825] E 
> [client3_1-fops.c:1226:client3_1_inodelk_cbk] 0-gfs1-client-26: remote 
> operation failed: No such file or directory
>
> [2013-01-22 19:14:04.849993] E 
> [client3_1-fops.c:1226:client3_1_inodelk_cbk] 0-gfs1-client-29: remote 
> operation failed: No such file or directory
>
> [2013-01-22 19:14:04.850039] E 
> [client3_1-fops.c:1226:client3_1_inodelk_cbk] 0-gfs1-client-20: remote 
> operation failed: No such file or directory
>
> [2013-01-22 19:44:05.36300] E [rpc-clnt.c:197:call_bail] 
> 0-gfs1-client-75: bailing out frame type(GlusterFS 3.1) 
> op(INODELK(29)) xid = 0x1668135x sent = 2013-01-22 19:14:04.843525. 
> timeout = 1800
>
> [2013-01-22 19:44:05.36903] E 
> [client3_1-fops.c:1226:client3_1_inodelk_cbk] 0-gfs1-client-75: remote 
> operation failed: Transport endpoint is not connected
>
> [2013-01-22 20:14:15.229286] E [rpc-clnt.c:197:call_bail] 
> 0-gfs1-client-76: bailing out frame type(GlusterFS 3.1) 
> op(INODELK(29)) xid = 0x1693471x sent = 2013-01-22 19:44:05.36960. 
> timeout = 1800
>
> [2013-01-22 20:14:15.229508] E 
> [client3_1-fops.c:1226:client3_1_inodelk_cbk] 0-gfs1-client-76: remote 
> operation failed: Transport endpoint is not connected
>
> [2013-01-22 20:44:25.439449] E [rpc-clnt.c:197:call_bail] 
> 0-gfs1-client-77: bailing out frame type(GlusterFS 3.1) 
> op(INODELK(29)) xid = 0x1683252x sent = 2013-01-22 20:14:15.229559. 
> timeout = 1800
>
> [2013-01-22 20:44:25.440202] E 
> [client3_1-fops.c:1226:client3_1_inodelk_cbk] 0-gfs1-client-77: remote 
> operation failed: Transport endpoint is not connected
>
> [2013-01-22 20:44:26.243548] E 
> [client3_1-fops.c:1303:client3_1_entrylk_cbk] 0-gfs1-client-18: remote 
> operation failed: No such file or directory
>
> [2013-01-22 20:44:26.243616] E 
> [client3_1-fops.c:1303:client3_1_entrylk_cbk] 0-gfs1-client-20: remote 
> operation failed: No such file or directory
>
> [2013-01-22 20:44:26.243645] E 
> [client3_1-fops.c:1303:client3_1_entrylk_cbk] 0-gfs1-client-19: remote 
> operation failed: No such file or directory
>
> [2013-01-22 20:44:26.246854] E 
> [client3_1-fops.c:1303:client3_1_entrylk_cbk] 0-gfs1-client-18: remote 
> operation failed: No such file or directory
>
> [2013-01-22 20:44:26.247769] E 
> [client3_1-fops.c:1303:client3_1_entrylk_cbk] 0-gfs1-client-19: remote 
> operation failed: No such file or directory
>
> [2013-01-22 20:44:26.252241] E 
> [client3_1-fops.c:1303:client3_1_entrylk_cbk] 0-gfs1-client-20: remote 
> operation failed: No such file or directory
>
> [2013-01-22 20:44:26.252269] W [fuse-bridge.c:1684:fuse_create_cbk] 
> 0-glusterfs-fuse: 12541819: /xmail_dedup/gfs1_000/011/12D/110 => -1 
> (No such file or directory)
>
> Please advise.
>
> Thanks!
>
>
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at nongnu.org
> https://lists.nongnu.org/mailman/listinfo/gluster-devel
Song,
     Could you attach the statedumps of bricks 75, 76 and 77?
You can create a statedump by sending kill -USR1 <pid of brick-process>. 
The dumps will be created in /tmp.
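For example, a rough sketch of dumping every brick of the volume on one server (the pidfile location is an assumption for a 3.2-era install, where glusterd kept its working directory under /etc/glusterd; adjust the glob to your layout):

```shell
#!/bin/sh
# Sketch, assuming a 3.2-era layout: brick pidfiles are expected under
# /etc/glusterd/vols/<volume>/run/ -- verify the path on your servers.

statedump() {
    kill -USR1 "$1"    # glusterfsd writes its state under /tmp on SIGUSR1
}

for pidfile in /etc/glusterd/vols/gfs1/run/*.pid; do
    [ -f "$pidfile" ] || continue        # glob may match nothing on this host
    statedump "$(cat "$pidfile")"
done
# ls /tmp/glusterdump.*    # one dump file per signalled brick process
```

Run it on each server hosting one of the suspect subvolumes (75, 76, 77), then collect the /tmp/glusterdump.* files.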

Pranith.
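A side note on the quoted timestamps: each call_bail fires roughly the advertised 1800 s frame timeout after the frame's "sent" time, so the client really is waiting the full timeout on each INODELK before giving up. A quick standard-library check (the log's unpadded fractional digits are assumed to be microseconds, zero-padded below):

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S.%f"

def bail_delay(bailed_at, sent_at):
    """Seconds between a frame's 'sent' time and its call_bail time."""
    return (datetime.strptime(bailed_at, FMT)
            - datetime.strptime(sent_at, FMT)).total_seconds()

# Timestamps copied from the quoted log (fractional parts zero-padded).
pairs = [
    ("2013-01-22 19:44:05.036300", "2013-01-22 19:14:04.843525"),
    ("2013-01-22 20:14:15.229286", "2013-01-22 19:44:05.036960"),
    ("2013-01-22 20:44:25.439449", "2013-01-22 20:14:15.229559"),
]
for bailed, sent in pairs:
    # Each delay is the 1800 s timeout plus a few seconds of bail-timer
    # sweep granularity.
    print(round(bail_delay(bailed, sent), 1))
```

The delays come out at ~1800-1810 s each, matching "timeout = 1800" in the log lines.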

