[Gluster-devel] making frame->root->unique more effective in debugging hung frames
FNU Raghavendra Manjunath
rabhat at redhat.com
Fri May 24 17:27:16 UTC 2019
The idea looks OK. One thing that probably needs to be considered
(more of an implementation detail, though) is how to generate
frame->root->unique.
For fuse, frame->root->unique is obtained from finh->unique, which
IIUC comes from the incoming fop sent by the kernel itself.
For protocol/server, IIUC frame->root->unique comes from req->xid of the
rpc request, which in turn is obtained from transport->xid of the
rpc_transport_t structure (and from my understanding, transport->xid is
simply incremented every time a new rpc request is created).
Overall the suggestion looks fine though.
Regards,
Raghavendra
On Fri, May 24, 2019 at 2:27 AM Pranith Kumar Karampuri <pkarampu at redhat.com>
wrote:
> Hi,
> At the moment new stack doesn't populate frame->root->unique in
> all cases. This makes it difficult to debug hung frames by examining
> successive state dumps. Fuse and server xlator populate it whenever they
> can, but other xlators won't be able to assign one when they need to create
> a new frame/stack. Is it okay to change create_frame() code to always
> populate it with an increasing number for this purpose?
> I checked that both fuse and server xlators use it only in gf_log(), so
> there doesn't seem to be any other link between frame->root->unique and
> the functionality of the fuse and server xlators.
> Do let me know if I missed anything before sending this change.
>
> --
> Pranith
> _______________________________________________
>
> Community Meeting Calendar:
>
> APAC Schedule -
> Every 2nd and 4th Tuesday at 11:30 AM IST
> Bridge: https://bluejeans.com/836554017
>
> NA/EMEA Schedule -
> Every 1st and 3rd Tuesday at 01:00 PM EDT
> Bridge: https://bluejeans.com/486278655
>
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>