[Gluster-devel] Regression-test-burn-in crash in EC test

Ashish Pandey aspandey at redhat.com
Thu Apr 28 06:57:14 UTC 2016


Hi Jeff, 

Where can we find the core dump? 

--- 
Ashish 

----- Original Message -----

From: "Pranith Kumar Karampuri" <pkarampu at redhat.com> 
To: "Jeff Darcy" <jdarcy at redhat.com> 
Cc: "Gluster Devel" <gluster-devel at gluster.org>, "Ashish Pandey" <aspandey at redhat.com> 
Sent: Thursday, April 28, 2016 11:58:54 AM 
Subject: Re: [Gluster-devel] Regression-test-burn-in crash in EC test 

Ashish, 
Could you take a look at this? 

Pranith 

----- Original Message ----- 
> From: "Jeff Darcy" <jdarcy at redhat.com> 
> To: "Gluster Devel" <gluster-devel at gluster.org> 
> Sent: Wednesday, April 27, 2016 11:31:25 PM 
> Subject: [Gluster-devel] Regression-test-burn-in crash in EC test 
> 
> One of the "rewards" of reviewing and merging people's patches is getting 
> email if the next regression-test-burn-in should fail - even if it fails for 
> a completely unrelated reason. Today I got one that's not among the usual 
> suspects. The failure was a core dump in tests/bugs/disperse/bug-1304988.t, 
> weighing in at a respectable 42 frames. 
> 
> #0 0x00007fef25976cb9 in dht_rename_lock_cbk 
> #1 0x00007fef25955f62 in dht_inodelk_done 
> #2 0x00007fef25957352 in dht_blocking_inodelk_cbk 
> #3 0x00007fef32e02f8f in default_inodelk_cbk 
> #4 0x00007fef25c029a3 in ec_manager_inodelk 
> #5 0x00007fef25bf9802 in __ec_manager 
> #6 0x00007fef25bf990c in ec_manager 
> #7 0x00007fef25c03038 in ec_inodelk 
> #8 0x00007fef25bee7ad in ec_gf_inodelk 
> #9 0x00007fef25957758 in dht_blocking_inodelk_rec 
> #10 0x00007fef25957b2d in dht_blocking_inodelk 
> #11 0x00007fef2597713f in dht_rename_lock 
> #12 0x00007fef25977835 in dht_rename 
> #13 0x00007fef32e0f032 in default_rename 
> #14 0x00007fef32e0f032 in default_rename 
> #15 0x00007fef32e0f032 in default_rename 
> #16 0x00007fef32e0f032 in default_rename 
> #17 0x00007fef32e0f032 in default_rename 
> #18 0x00007fef32e07c29 in default_rename_resume 
> #19 0x00007fef32d8ed40 in call_resume_wind 
> #20 0x00007fef32d98b2f in call_resume 
> #21 0x00007fef24cfc568 in open_and_resume 
> #22 0x00007fef24cffb99 in ob_rename 
> #23 0x00007fef24aee482 in mdc_rename 
> #24 0x00007fef248d68e5 in io_stats_rename 
> #25 0x00007fef32e0f032 in default_rename 
> #26 0x00007fef2ab1b2b9 in fuse_rename_resume 
> #27 0x00007fef2ab12c47 in fuse_fop_resume 
> #28 0x00007fef2ab107cc in fuse_resolve_done 
> #29 0x00007fef2ab108a2 in fuse_resolve_all 
> #30 0x00007fef2ab10900 in fuse_resolve_continue 
> #31 0x00007fef2ab0fb7c in fuse_resolve_parent 
> #32 0x00007fef2ab1077d in fuse_resolve 
> #33 0x00007fef2ab10879 in fuse_resolve_all 
> #34 0x00007fef2ab10900 in fuse_resolve_continue 
> #35 0x00007fef2ab0fb7c in fuse_resolve_parent 
> #36 0x00007fef2ab1077d in fuse_resolve 
> #37 0x00007fef2ab10824 in fuse_resolve_all 
> #38 0x00007fef2ab1093e in fuse_resolve_and_resume 
> #39 0x00007fef2ab1b40e in fuse_rename 
> #40 0x00007fef2ab2a96a in fuse_thread_proc 
> #41 0x00007fef3204daa1 in start_thread 
> 
> In other words, we started at FUSE, went through a bunch of performance 
> translators, through DHT to EC, and then crashed on the way back. It seems 
> a little odd that we turn the fop around immediately in EC, and that we have 
> default_inodelk_cbk at frame 3. Could one of the DHT or EC people please 
> take a look at it? Thanks! 
> 
> 
> https://build.gluster.org/job/regression-test-burn-in/868/console 
> _______________________________________________ 
> Gluster-devel mailing list 
> Gluster-devel at gluster.org 
> http://www.gluster.org/mailman/listinfo/gluster-devel 
> 
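
For what it's worth, the odd part Jeff flags above (EC turning the inodelk
around immediately, so the whole DHT callback chain runs on the same stack
as the wind) is a classic reentrancy trap. Below is a minimal, self-contained
C sketch of that failure mode. It is purely illustrative: every name in it
(rename_local, lower_inodelk, rename_lock_cbk) is hypothetical, none of it
is the actual DHT/EC code, and whether this is what happened here can only
be confirmed from the core.

/* reentrancy-sketch.c: a minimal, hypothetical illustration (not the
 * actual Gluster code) of why a synchronous turnaround in a lower
 * layer can bite the caller: the callback fires on the caller's own
 * stack, before the wind call returns and before the caller has
 * finished setting up the state the callback expects. */
#include <stdio.h>
#include <stdlib.h>

typedef void (*lock_cbk_t)(void *cookie);

struct rename_local {              /* stand-in for DHT's frame->local */
    int *lock_table;               /* caller fills this in after winding */
};

/* Stand-in for a lower translator (think ec_inodelk) that can complete
 * the fop immediately instead of deferring to the network. */
static void lower_inodelk(struct rename_local *local, lock_cbk_t cbk)
{
    cbk(local);                    /* callback runs right here */
}

/* Stand-in for dht_rename_lock_cbk. */
static void rename_lock_cbk(void *cookie)
{
    struct rename_local *local = cookie;
    if (local->lock_table == NULL) {
        /* In a real crash this would be a stray dereference; here we
         * just report that the callback outran the caller's setup. */
        fprintf(stderr, "callback ran before setup finished\n");
        return;
    }
    printf("lock entry: %d\n", local->lock_table[0]);
}

int main(void)
{
    struct rename_local local = { .lock_table = NULL };

    lower_inodelk(&local, rename_lock_cbk);    /* cbk fires in here... */

    local.lock_table = calloc(1, sizeof(int)); /* ...before this runs */
    free(local.lock_table);
    return 0;
}

Compiled and run (cc reentrancy-sketch.c && ./a.out), it prints the
diagnostic because the callback observes lock_table before main() assigns
it; in the real stack the equivalent ordering bug would dereference garbage
inside dht_rename_lock_cbk. Again, this is just one plausible shape for the
bug, not a diagnosis.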
