[Bugs] [Bug 1221620] New: Bitd crashed on tier volume

bugzilla at redhat.com
Thu May 14 12:52:33 UTC 2015


https://bugzilla.redhat.com/show_bug.cgi?id=1221620

            Bug ID: 1221620
           Summary: Bitd crashed on tier volume
           Product: GlusterFS
           Version: mainline
         Component: bitrot
          Severity: high
          Assignee: bugs at gluster.org
          Reporter: rmekala at redhat.com
                CC: bugs at gluster.org
      Docs Contact: bugs at gluster.org



Description of problem:
================================
The bitrot daemon (bitd) crashed on a tiered volume.

Version-Release number of selected component (if applicable):
=======================================
glusterfs-server-3.7.0beta2-0.0.el6.x86_64


How reproducible:


Steps to Reproduce:
========================
1. Create an EC (4+2) volume, enable bitrot, and run some I/O.
2. After some time, attach a tier and keep the I/O running; after a while bitd crashed. (A CLI sketch of these steps follows this list.)
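
A minimal CLI sketch of the steps above. The volume name, hostnames (server1..server6), and brick paths are hypothetical placeholders, not the reporter's exact setup:

    # 4+2 disperse (EC) volume across six bricks
    gluster volume create ecvol disperse 6 redundancy 2 \
        server1:/bricks/ec1 server2:/bricks/ec2 server3:/bricks/ec3 \
        server4:/bricks/ec4 server5:/bricks/ec5 server6:/bricks/ec6
    gluster volume start ecvol

    # enable bitrot detection; this starts bitd on the server nodes
    gluster volume bitrot ecvol enable

    # ... run I/O against a mount of ecvol for some time ...

    # attach a replicated hot tier, then keep the I/O running
    gluster volume attach-tier ecvol replica 2 \
        server1:/bricks/hot1 server2:/bricks/hot2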

Actual results:
===============
bitd crashed with SIGSEGV (signal 11); see the backtrace under "Additional info" below.

Expected results:
=================
bitd should not crash.


Additional info:

(gdb) bt
#0  x86_64_fallback_frame_state (context=0x7f2140df8a10, fs=0x7f2140df8890) at ../../../gcc/config/i386/linux-unwind.h:47
#1  uw_frame_state_for (context=0x7f2140df8a10, fs=0x7f2140df8890) at ../../../gcc/unwind-dw2.c:1210
#2  0x0000003ec3010119 in _Unwind_Backtrace (trace=0x3ebc4fe8d0 <backtrace_helper>, trace_argument=0x7f2140df8b50) at ../../../gcc/unwind.inc:290
#3  0x0000003ebc4fea66 in __backtrace (array=<value optimized out>, size=200) at ../sysdeps/ia64/backtrace.c:110
#4  0x0000003d78424b96 in _gf_msg_backtrace_nomem (level=<value optimized out>, stacksize=200) at logging.c:1097
#5  0x0000003d784435af in gf_print_trace (signum=11, ctx=0xcd6010) at common-utils.c:618
#6  <signal handler called>
#7  0x0000000100004b92 in ?? ()
#8  0x00007f2148df8caa in gf_changelog_reborp_rpcsvc_notify (rpc=<value optimized out>, mydata=0x7f2128001850, event=<value optimized out>, data=<value optimized out>) at gf-changelog-reborp.c:169
#9  0x0000003d78808535 in rpcsvc_handle_disconnect (svc=0x7f2128002c40, trans=0x7f21440d1700) at rpcsvc.c:754
#10 0x0000003d7880a090 in rpcsvc_notify (trans=0x7f21440d1700, mydata=<value optimized out>, event=<value optimized out>, data=0x7f21440d1700) at rpcsvc.c:792
#11 0x0000003d7880b8e8 in rpc_transport_notify (this=<value optimized out>, event=<value optimized out>, data=<value optimized out>) at rpc-transport.c:543
#12 0x00007f214a07c6a1 in socket_event_poll_err (fd=<value optimized out>, idx=<value optimized out>, data=0x7f21440d1700, poll_in=<value optimized out>, poll_out=0, poll_err=16) at socket.c:1205
#13 socket_event_handler (fd=<value optimized out>, idx=<value optimized out>, data=0x7f21440d1700, poll_in=<value optimized out>, poll_out=0, poll_err=16) at socket.c:2410
#14 0x0000003d78480f80 in event_dispatch_epoll_handler (data=0x7f21440175b0) at event-epoll.c:572
#15 event_dispatch_epoll_worker (data=0x7f21440175b0) at event-epoll.c:674
#16 0x0000003ebc8079d1 in start_thread (arg=0x7f2140dfa700) at pthread_create.c:301
#17 0x0000003ebc4e89dd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115
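
The trace shows the crash occurring while an epoll thread delivers a socket error/disconnect: rpcsvc_handle_disconnect notifies the changelog reconnection handler (gf_changelog_reborp_rpcsvc_notify, frame #8), which then jumps to 0x0000000100004b92, an address with no symbol (frame #7). That pattern suggests a call through a stale or corrupted function pointer, possibly a changelog session torn down while the disconnect event was in flight; this is a reading of the trace, not a confirmed root cause. If the core file is still available, a few follow-up gdb commands may help narrow it down (frame number taken from the trace above):

    (gdb) thread apply all bt
    (gdb) frame 8
    (gdb) info locals
    (gdb) list gf-changelog-reborp.c:169

'thread apply all bt' shows whether another thread is tearing down the same changelog session, and 'frame 8' / 'info locals' expose whatever state around mydata survived optimization at the call site.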
