[Gluster-devel] Question on rpc_transport_unref
Emmanuel Dreyfus
manu at netbsd.org
Fri Apr 26 18:25:56 UTC 2013
Hi
I am trying to track down my problem with bricks disconnecting a client while
operations are in progress (3.4.0alpha3). It happens with the stack trace shown below.
I understand that calling free_state()/rpc_transport_unref() is the correct behavior
once an operation is done. Then, if the transport reference count drops to zero,
rpc_transport_unref() calls rpc_transport_destroy() and the brick
disconnects the client.
Is it normal behavior that the reference count drops to zero while the brick is
being used by a client? Is there a reference count problem somewhere?
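
To make sure I read the code correctly, this is the pattern I have in mind, as a
minimal sketch (names and locking are simplified, this is not the actual
rpc-transport.c source):

/* simplified sketch of the ref/unref pattern, not the real code */
struct transport {
        int refcount;   /* manipulated under a lock in the real code */
        /* ... */
};

static void
transport_destroy (struct transport *t)
{
        /* socket teardown happens here, which is what disconnects the client */
}

void
transport_unref (struct transport *t)
{
        if (--t->refcount == 0)
                transport_destroy (t);
}

If something performs one unref too many, the count reaches zero while a client
still holds the connection, and the destroy path above would explain the
disconnects I am seeing.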
#2 0x00007f7ff78529d2 in event_unregister_poll (event_pool=0x7f7ff7b490c0,
fd=10, idx_hint=5) at event-poll.c:257
#3 0x00007f7ff3807f87 in __socket_reset (this=0x7f7ff3701000) at socket.c:840
#4 0x00007f7ff380b477 in fini (this=0x7f7ff3701000) at socket.c:3573
#5 0x00007f7ff740a2ae in rpc_transport_destroy (this=0x7f7ff3701000)
at rpc-transport.c:428
#6 0x00007f7ff740a3d8 in rpc_transport_unref (this=0x7f7ff3701000)
at rpc-transport.c:480
#7 0x00007f7ff100cfc8 in free_state (state=0x7f7ff3721000)
at server-helpers.c:79
#8 0x00007f7ff100ac09 in server_submit_reply (frame=0x7f7ff4d041b0,
req=0x7f7ff04063ec, arg=<optimized out>, payload=0x0, payloadcount=0,
iobref=0x7f7ff231a0c0, xdrproc=0x7f7ff700a1f3 <xdr_gf_common_rsp>)
at server.c:182
#9 0x00007f7ff101914d in server_entrylk_cbk (frame=0x7f7ff4d041b0,
cookie=<optimized out>, this=0x7f7ff6fe0000, op_ret=0, op_errno=0,
xdata=<optimized out>) at server-rpc-fops.c:361
#10 0x00007f7ff140be61 in io_stats_entrylk_cbk (frame=0x7f7ff7301b5c,
cookie=<optimized out>, this=<optimized out>, op_ret=0, op_errno=0,
xdata=0x0) at io-stats.c:1741
#11 0x00007f7ff2004e09 in iot_entrylk_cbk (frame=0x7f7ff730188c,
cookie=<optimized out>, this=<optimized out>, op_ret=0, op_errno=0,
xdata=<optimized out>) at io-threads.c:2169
#12 0x00007f7ff240e477 in pl_common_entrylk (frame=<optimized out>,
this=0x7f7ff6fdb000, volume=0x7f7ff3703060 "gfs33-replicate-1",
inode=0x7f7fef50e0f8, basename=0x7f7ff3717090 "csh.g.09364q",
cmd=ENTRYLK_UNLOCK, type=ENTRYLK_WRLCK, loc=0x7f7ff4a049fc, fd=0x0)
at entrylk.c:724
#13 0x00007f7ff240eafb in pl_entrylk (frame=<optimized out>,
this=<optimized out>, volume=<optimized out>, loc=<optimized out>,
basename=<optimized out>, cmd=<optimized out>, type=ENTRYLK_WRLCK)
at entrylk.c:746
#14 0x00007f7ff2007c7e in iot_entrylk_wrapper (frame=0x7f7ff730188c,
this=0x7f7ff6fdc000, volume=0x7f7ff3703060 "gfs33-replicate-1",
loc=0x7f7ff4a049fc, basename=0x7f7ff3717090 "csh.g.09364q",
cmd=ENTRYLK_UNLOCK, type=ENTRYLK_WRLCK, xdata=0x0) at io-threads.c:2179
#15 0x00007f7ff7830db7 in call_resume_wind (stub=0x7f7ff4a049bc)
at call-stub.c:2663
#16 call_resume (stub=0x7f7ff4a049bc) at call-stub.c:4142
#17 0x00007f7ff200b8ba in iot_worker (data=0x7f7ff6fec120) at io-threads.c:191
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
manu at netbsd.org