[Bugs] [Bug 1767264] New: glusterfs client process coredump

bugzilla at redhat.com bugzilla at redhat.com
Thu Oct 31 02:37:58 UTC 2019


https://bugzilla.redhat.com/show_bug.cgi?id=1767264

            Bug ID: 1767264
           Summary: glusterfs client process coredump
           Product: GlusterFS
           Version: 7
          Hardware: x86_64
                OS: Linux
            Status: NEW
         Component: logging
          Severity: urgent
          Assignee: bugs at gluster.org
          Reporter: zz.sh.cynthia at gmail.com
                CC: bugs at gluster.org
  Target Milestone: ---
    Classification: Community



Created attachment 1630869
  --> https://bugzilla.redhat.com/attachment.cgi?id=1630869&action=edit
glusterfs process trace level log

Description of problem:

The glusterfs client process dumps core (aborts with SIGABRT).

Version-Release number of selected component (if applicable):

glusterfs 7
How reproducible:


Steps to Reproduce:
1. Start I/O on the mounted volume with the ior benchmark tool.
2. Trigger a statedump of the glusterfs client process every 60 seconds.
3. The glusterfs client process dumps core.
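The statedump part of the steps above can be sketched as a small shell script. The process pattern, script name, and interval handling here are illustrative assumptions; the one documented behavior relied on is that a GlusterFS process writes a statedump (under /var/run/gluster by default) when it receives SIGUSR1.

```shell
#!/bin/sh
# statedump-loop.sh (hypothetical name): send SIGUSR1 to the fuse client
# every 60 seconds, as in step 2 of the reproduction.

# Find the fuse client PID (assumes a single fuse-mount client process).
find_fuse_client() {
    pgrep -f 'glusterfs .*--process-name fuse' | head -n 1
}

# Keep triggering statedumps for as long as the process is alive.
statedump_loop() {
    pid=$1
    while kill -0 "$pid" 2>/dev/null; do
        kill -USR1 "$pid"   # GlusterFS dumps its state on SIGUSR1
        sleep 60
    done
    echo "client $pid exited - check for a core dump"
}

# Only act when explicitly asked, so sourcing the file is harmless.
if [ "${1:-}" = "--run" ]; then
    pid=$(find_fuse_client)
    if [ -n "$pid" ]; then
        statedump_loop "$pid"
    else
        echo "no glusterfs fuse client found"
    fi
fi
```

Run it as `sh statedump-loop.sh --run` alongside the ior workload from step 1.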

Actual results:


Expected results:


Additional info:
1>gdb info

[New LWP 6471]
[New LWP 6472]
[New LWP 6464]
[New LWP 6494]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `/usr/sbin/glusterfs --acl --process-name fuse
--volfile-server=mn-0.local --vol'.
Program terminated with signal SIGABRT, Aborted.
#0  0x00007f71904b157f in raise () from /lib64/libc.so.6
[Current thread is 1 (Thread 0x7f718efd1700 (LWP 6465))]
Missing separate debuginfos, use: dnf debuginfo-install
glibc-2.28-33.wf30.x86_64 libgcc-8.3.1-2.wf30.x86_64
libtirpc-1.1.4-0.wf30.x86_64 libuuid-2.33.2-2.wf30.x86_64
zlib-1.2.11-15.wf30.x86_64
(gdb) bt
#0  0x00007f71904b157f in raise () from /lib64/libc.so.6
#1  0x00007f719049b895 in abort () from /lib64/libc.so.6
#2  0x00007f71904f49d7 in __libc_message () from /lib64/libc.so.6
#3  0x00007f71904f4a9a in __libc_fatal () from /lib64/libc.so.6
#4  0x00007f719064f98d in pthread_cond_timedwait@@GLIBC_2.3.2 () from
/lib64/libpthread.so.0
#5  0x00007f7190729911 in gf_timer_proc (data=0x55aa65bc1f20) at timer.c:140
#6  0x00007f719064958b in start_thread () from /lib64/libpthread.so.0
#7  0x00007f7190576703 in clone () from /lib64/libc.so.6
(gdb) thread apply all bt

Thread 12 (Thread 0x7f718a894700 (LWP 6494)):
#0  0x00007f719056d15f in readv () from /lib64/libc.so.6
#1  0x00007f7190746759 in sys_readv (fd=<optimized out>,
iov=iov at entry=0x7f718a893150, iovcnt=iovcnt at entry=2) at syscall.c:328
#2  0x00007f718eff4a93 in fuse_thread_proc (data=0x55aa65bb0eb0) at
fuse-bridge.c:5957
#3  0x00007f719064958b in start_thread () from /lib64/libpthread.so.0
#4  0x00007f7190576703 in clone () from /lib64/libc.so.6

Thread 11 (Thread 0x7f718ffed480 (LWP 6464)):
#0  0x00007f719064aa6d in __pthread_timedjoin_ex () from /lib64/libpthread.so.0
#1  0x00007f719077b4c7 in event_dispatch_epoll (event_pool=0x55aa65ba6760) at
event-epoll.c:840
#2  0x000055aa656fc5d0 in main (argc=<optimized out>, argv=<optimized out>) at
glusterfsd.c:2919

Thread 10 (Thread 0x7f7183fff700 (LWP 6472)):
#0  0x00007f7190652b4c in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007f719064bda8 in pthread_mutex_lock () from /lib64/libpthread.so.0
#2  0x00007f719071d6df in gf_log_glusterlog (ctx=0x55aa65b70260,
domain=domain at entry=0x7f718b63800f "stack-trace",
file=file at entry=0x7f718b63a027 "afr-transaction.c",
function=function at entry=0x7f718b63a820 <__FUNCTION__.21496> "afr_changelog_do",
line=line at entry=1959, level=level at entry=GF_LOG_TRACE, errnum=0, msgid=0,
appmsgstr=0x7f7183ffd698, callstr=0x0, tv=..., graph_id=0,
fmt=gf_logformat_withmsgid) at logging.c:1364
#3  0x00007f719071dadd in gf_log_print_plain_fmt (ctx=ctx at entry=0x55aa65b70260,
domain=domain at entry=0x7f718b63800f "stack-trace",
file=file at entry=0x7f718b63a027 "afr-transaction.c",
function=function at entry=0x7f718b63a820 <__FUNCTION__.21496> "afr_changelog_do",
line=line at entry=1959, level=level at entry=GF_LOG_TRACE, errnum=<optimized out>,
msgid=<optimized out>, appmsgstr=<optimized out>, callstr=<optimized out>,
tv=..., graph_id=<optimized out>, fmt=<optimized out>) at logging.c:1586
#4  0x00007f719071c23a in _gf_msg_internal (graph_id=0, callstr=<optimized
out>, appmsgstr=0x7f7183ffd698, msgid=0, errnum=0, level=GF_LOG_TRACE,
line=1959, function=0x7f718b63a820 <__FUNCTION__.21496> "afr_changelog_do",
file=0x7f718b63a027 "afr-transaction.c", domain=0x7f718b63800f "stack-trace")
at logging.c:1926
#5  _gf_msg (domain=domain at entry=0x7f718b63800f "stack-trace",
file=file at entry=0x7f718b63a027 "afr-transaction.c",
function=function at entry=0x7f718b63a820 <__FUNCTION__.21496> "afr_changelog_do",
line=line at entry=1959, level=level at entry=GF_LOG_TRACE, errnum=errnum at entry=0,
trace=0, msgid=0, fmt=0x7f718b6380c8 "stack-address: %p, winding from %s to
%s") at logging.c:2004
#6  0x00007f718b5fb109 in afr_changelog_do (frame=frame at entry=0x7f7184061498,
this=this at entry=0x7f717c0115a0, xattr=xattr at entry=0x7f717c0c2a78,
changelog_resume=changelog_resume at entry=0x7f718b5f9090
<afr_changelog_post_op_done>, op=op at entry=AFR_TRANSACTION_POST_OP) at
afr-transaction.c:1956
#7  0x00007f718b5fca44 in afr_changelog_post_op_do (frame=0x7f7184061498,
this=0x7f717c0115a0) at afr-transaction.c:1562
#8  0x00007f718b5fd3ff in afr_changelog_post_op_now (frame=0x7f7184061498,
this=0x7f717c0115a0) at afr-transaction.c:1593
#9  0x00007f718b5fd580 in afr_delayed_changelog_wake_up_cbk
(data=data at entry=0x7f716c008748) at afr-transaction.c:2502
#10 0x00007f718b621645 in afr_delayed_changelog_wake_resume
(this=this at entry=0x7f717c0115a0, inode=0x7f717c0bb8e8, stub=0x7f717c0c46f8) at
afr-common.c:3673
#11 0x00007f718b626a9c in afr_flush (frame=frame at entry=0x7f717c0d3948,
this=this at entry=0x7f717c0115a0, fd=fd at entry=0x7f7174013e58,
xdata=xdata at entry=0x0) at afr-common.c:3701
#12 0x00007f71907a7de6 in default_flush (frame=frame at entry=0x7f717c0d3948,
this=this at entry=0x7f717c013cd0, fd=fd at entry=0x7f7174013e58,
xdata=xdata at entry=0x0) at defaults.c:2531
#13 0x00007f71907a7de6 in default_flush (frame=frame at entry=0x7f717c0d3948,
this=<optimized out>, fd=fd at entry=0x7f7174013e58, xdata=xdata at entry=0x0) at
defaults.c:2531
#14 0x00007f718b4efbea in wb_flush_helper (frame=0x7f71740418b8,
this=0x7f717c017730, fd=0x7f7174013e58, xdata=0x0) at write-behind.c:1996
#15 0x00007f7190741435 in call_resume_keep_stub (stub=0x7f7174010fb8) at
call-stub.c:2621
#16 0x00007f718b4f3845 in wb_do_winds (wb_inode=wb_inode at entry=0x7f716c0032e0,
tasks=tasks at entry=0x7f7183ffdae0) at write-behind.c:1744
#17 0x00007f718b4f397d in wb_process_queue
(wb_inode=wb_inode at entry=0x7f716c0032e0) at write-behind.c:1781
#18 0x00007f718b4f3b14 in wb_fulfill_cbk (frame=frame at entry=0x7f717c0bf358,
cookie=<optimized out>, this=<optimized out>, op_ret=op_ret at entry=1000,
op_errno=op_errno at entry=0, prebuf=prebuf at entry=0x7f716c015118,
postbuf=<optimized out>, xdata=<optimized out>) at write-behind.c:1108
#19 0x00007f718bf9bb92 in gf_utime_writev_cbk (frame=0x7f717c08ca08,
cookie=<optimized out>, this=<optimized out>, op_ret=1000, op_errno=0,
prebuf=0x7f716c015118, postbuf=0x7f716c0151b8, xdata=0x7f718404fe98) at
utime-autogen-fops.c:63
#20 0x00007f718b5ea41d in afr_writev_unwind (frame=frame at entry=0x7f716c0185b8,
this=this at entry=0x7f717c0115a0) at afr-inode-write.c:228
#21 0x00007f718b5ea9d4 in afr_writev_wind_cbk (cookie=<optimized out>,
op_ret=<optimized out>, op_errno=<optimized out>, prebuf=<optimized out>,
postbuf=0x7f7183ffdd80, xdata=0x7f717c0d44d8, this=0x7f717c0115a0,
frame=0x7f7184061498) at afr-inode-write.c:382
#22 afr_writev_wind_cbk (frame=0x7f7184061498, cookie=<optimized out>,
this=0x7f717c0115a0, op_ret=<optimized out>, op_errno=<optimized out>,
prebuf=<optimized out>, postbuf=0x7f7183ffdd80, xdata=0x7f717c0d44d8) at
afr-inode-write.c:348
#23 0x00007f718b6af308 in client4_0_writev_cbk (req=<optimized out>,
iov=<optimized out>, count=<optimized out>, myframe=0x7f718406a7d8) at
client-rpc-fops_v2.c:684
#24 0x00007f71906c1e9e in rpc_clnt_handle_reply
(clnt=clnt at entry=0x7f717c05a460, pollin=pollin at entry=0x7f717c062ab0) at
rpc-clnt.c:770
#25 0x00007f71906c2283 in rpc_clnt_notify (trans=0x7f717c05a720,
mydata=0x7f717c05a490, event=<optimized out>, data=0x7f717c062ab0) at
rpc-clnt.c:938
#26 0x00007f71906bee23 in rpc_transport_notify (this=this at entry=0x7f717c05a720,
event=event at entry=RPC_TRANSPORT_MSG_RECEIVED, data=data at entry=0x7f717c062ab0)
at rpc-transport.c:545
#27 0x00007f718bfaee38 in socket_event_poll_in_async (xl=<optimized out>,
async=0x7f717c062bd8) at socket.c:2601
#28 0x00007f718bfb7b1c in gf_async (cbk=0x7f718bfaee10
<socket_event_poll_in_async>, xl=<optimized out>, async=<optimized out>) at
../../../../libglusterfs/src/glusterfs/async.h:189
#29 socket_event_poll_in (notify_handled=true, this=0x7f717c05a720) at
socket.c:2642
#30 socket_event_handler (event_thread_died=0 '\000', poll_err=0,
poll_out=<optimized out>, poll_in=<optimized out>, data=0x7f717c05a720, gen=4,
idx=3, fd=2080746960) at socket.c:3033
#31 socket_event_handler (fd=fd at entry=7, idx=idx at entry=3, gen=gen at entry=4,
data=data at entry=0x7f717c05a720, poll_in=<optimized out>, poll_out=<optimized
out>, poll_err=0, event_thread_died=0 '\000') at socket.c:2960
#32 0x00007f719077be0b in event_dispatch_epoll_handler (event=0x7f7183ffe154,
event_pool=0x55aa65ba6760) at event-epoll.c:642
#33 event_dispatch_epoll_worker (data=0x55aa65bf02d0) at event-epoll.c:755
#34 0x00007f719064958b in start_thread () from /lib64/libpthread.so.0
#35 0x00007f7190576703 in clone () from /lib64/libc.so.6

Thread 9 (Thread 0x7f718beed700 (LWP 6471)):
#0  0x00007f7190652b4c in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007f719064bda8 in pthread_mutex_lock () from /lib64/libpthread.so.0
#2  0x00007f719071d6df in gf_log_glusterlog (ctx=0x55aa65b70260,
domain=domain at entry=0x7f718b6cc03f "stack-trace",
file=file at entry=0x7f718b6d10b0 "client-rpc-fops_v2.c",
function=function at entry=0x7f718b6d1b80 <__FUNCTION__.23859>
"client4_0_create_cbk", line=line at entry=2119, level=level at entry=GF_LOG_TRACE,
errnum=0, msgid=0, appmsgstr=0x7f718beeba08, callstr=0x0, tv=..., graph_id=0,
fmt=gf_logformat_withmsgid) at logging.c:1364
#3  0x00007f719071dadd in gf_log_print_plain_fmt (ctx=ctx at entry=0x55aa65b70260,
domain=domain at entry=0x7f718b6cc03f "stack-trace",
file=file at entry=0x7f718b6d10b0 "client-rpc-fops_v2.c",
function=function at entry=0x7f718b6d1b80 <__FUNCTION__.23859>
"client4_0_create_cbk", line=line at entry=2119, level=level at entry=GF_LOG_TRACE,
errnum=<optimized out>, msgid=<optimized out>, appmsgstr=<optimized out>,
callstr=<optimized out>, tv=..., graph_id=<optimized out>, fmt=<optimized out>)
at logging.c:1586
#4  0x00007f719071c23a in _gf_msg_internal (graph_id=0, callstr=<optimized
out>, appmsgstr=0x7f718beeba08, msgid=0, errnum=0, level=GF_LOG_TRACE,
line=2119, function=0x7f718b6d1b80 <__FUNCTION__.23859> "client4_0_create_cbk",
file=0x7f718b6d10b0 "client-rpc-fops_v2.c", domain=0x7f718b6cc03f
"stack-trace") at logging.c:1926
#5  _gf_msg (domain=domain at entry=0x7f718b6cc03f "stack-trace",
file=file at entry=0x7f718b6d10b0 "client-rpc-fops_v2.c",
function=function at entry=0x7f718b6d1b80 <__FUNCTION__.23859>
"client4_0_create_cbk", line=line at entry=2119, level=level at entry=GF_LOG_TRACE,
errnum=errnum at entry=0, trace=0, msgid=0, fmt=0x7f718b6cc8e8 "stack-address: %p,
%s returned %d") at logging.c:2004
#6  0x00007f718b6b110a in client4_0_create_cbk (req=0x7f7184021a38,
iov=<optimized out>, count=<optimized out>, myframe=0x7f718403fb18) at
client-rpc-fops_v2.c:2117
#7  0x00007f71906c1e9e in rpc_clnt_handle_reply
(clnt=clnt at entry=0x7f717c05dd60, pollin=pollin at entry=0x7f718401e760) at
rpc-clnt.c:770
#8  0x00007f71906c2283 in rpc_clnt_notify (trans=0x7f717c05e020,
mydata=0x7f717c05dd90, event=<optimized out>, data=0x7f718401e760) at
rpc-clnt.c:938
#9  0x00007f71906bee23 in rpc_transport_notify (this=this at entry=0x7f717c05e020,
event=event at entry=RPC_TRANSPORT_MSG_RECEIVED, data=data at entry=0x7f718401e760)
at rpc-transport.c:545
#10 0x00007f718bfaee38 in socket_event_poll_in_async (xl=<optimized out>,
async=0x7f718401e888) at socket.c:2601
#11 0x00007f718bfb7b1c in gf_async (cbk=0x7f718bfaee10
<socket_event_poll_in_async>, xl=<optimized out>, async=<optimized out>) at
../../../../libglusterfs/src/glusterfs/async.h:189
#12 socket_event_poll_in (notify_handled=true, this=0x7f717c05e020) at
socket.c:2642
#13 socket_event_handler (event_thread_died=0 '\000', poll_err=0,
poll_out=<optimized out>, poll_in=<optimized out>, data=0x7f717c05e020, gen=1,
idx=4, fd=2080761552) at socket.c:3033
#14 socket_event_handler (fd=fd at entry=10, idx=idx at entry=4, gen=gen at entry=1,
data=data at entry=0x7f717c05e020, poll_in=<optimized out>, poll_out=<optimized
out>, poll_err=0, event_thread_died=0 '\000') at socket.c:2960
#15 0x00007f719077be0b in event_dispatch_epoll_handler (event=0x7f718beec154,
event_pool=0x55aa65ba6760) at event-epoll.c:642
#16 event_dispatch_epoll_worker (data=0x55aa65bf0130) at event-epoll.c:755
#17 0x00007f719064958b in start_thread () from /lib64/libpthread.so.0
#18 0x00007f7190576703 in clone () from /lib64/libc.so.6

Thread 8 (Thread 0x7f718a093700 (LWP 6495)):
#0  0x00007f719064f6ec in pthread_cond_wait@@GLIBC_2.3.2 () from
/lib64/libpthread.so.0
#1  0x00007f718efdbf73 in timed_response_loop (data=<optimized out>) at
fuse-bridge.c:4913
#2  0x00007f719064958b in start_thread () from /lib64/libpthread.so.0
#3  0x00007f7190576703 in clone () from /lib64/libc.so.6

Thread 7 (Thread 0x7f718d7ce700 (LWP 6468)):
#0  0x00007f719064fa3b in pthread_cond_timedwait@@GLIBC_2.3.2 () from
/lib64/libpthread.so.0
#1  0x00007f7190759006 in syncenv_task (proc=proc at entry=0x55aa65bc2440) at
syncop.c:517
#2  0x00007f7190759c50 in syncenv_processor (thdata=0x55aa65bc2440) at
syncop.c:584
#3  0x00007f719064958b in start_thread () from /lib64/libpthread.so.0
#4  0x00007f7190576703 in clone () from /lib64/libc.so.6

Thread 6 (Thread 0x7f718c7cc700 (LWP 6470)):
#0  0x00007f719056dd5f in select () from /lib64/libc.so.6
#1  0x00007f71907937cd in runner (arg=0x55aa65bc65f0) at
../../contrib/timer-wheel/timer-wheel.c:186
#2  0x00007f719064958b in start_thread () from /lib64/libpthread.so.0
#3  0x00007f7190576703 in clone () from /lib64/libc.so.6

Thread 5 (Thread 0x7f718cfcd700 (LWP 6469)):
#0  0x00007f719064fa3b in pthread_cond_timedwait@@GLIBC_2.3.2 () from
/lib64/libpthread.so.0
#1  0x00007f7190759006 in syncenv_task (proc=proc at entry=0x55aa65bc2820) at
syncop.c:517
#2  0x00007f7190759c50 in syncenv_processor (thdata=0x55aa65bc2820) at
syncop.c:584
#3  0x00007f719064958b in start_thread () from /lib64/libpthread.so.0
#4  0x00007f7190576703 in clone () from /lib64/libc.so.6

Thread 4 (Thread 0x7f718dfcf700 (LWP 6467)):
#0  0x00007f7190542878 in nanosleep () from /lib64/libc.so.6
#1  0x00007f719054277e in sleep () from /lib64/libc.so.6
#2  0x00007f71907449ad in pool_sweeper (arg=<optimized out>) at mem-pool.c:446
#3  0x00007f719064958b in start_thread () from /lib64/libpthread.so.0
#4  0x00007f7190576703 in clone () from /lib64/libc.so.6

Thread 3 (Thread 0x7f7189892700 (LWP 6497)):
#0  0x00007f719064f6ec in pthread_cond_wait@@GLIBC_2.3.2 () from
/lib64/libpthread.so.0
#1  0x00007f718efdb15b in notify_kernel_loop (data=<optimized out>) at
fuse-bridge.c:4828
#2  0x00007f719064958b in start_thread () from /lib64/libpthread.so.0
#3  0x00007f7190576703 in clone () from /lib64/libc.so.6

Thread 2 (Thread 0x7f718e7d0700 (LWP 6466)):
#0  0x00007f71904b234c in sigtimedwait () from /lib64/libc.so.6
#1  0x00007f7190653bbc in sigwait () from /lib64/libpthread.so.0
#2  0x000055aa656fcbe3 in glusterfs_sigwaiter (arg=<optimized out>) at
glusterfsd.c:2416
#3  0x00007f719064958b in start_thread () from /lib64/libpthread.so.0
#4  0x00007f7190576703 in clone () from /lib64/libc.so.6

Thread 1 (Thread 0x7f718efd1700 (LWP 6465)):
#0  0x00007f71904b157f in raise () from /lib64/libc.so.6
#1  0x00007f719049b895 in abort () from /lib64/libc.so.6
#2  0x00007f71904f49d7 in __libc_message () from /lib64/libc.so.6
#3  0x00007f71904f4a9a in __libc_fatal () from /lib64/libc.so.6
#4  0x00007f719064f98d in pthread_cond_timedwait@@GLIBC_2.3.2 () from
/lib64/libpthread.so.0
#5  0x00007f7190729911 in gf_timer_proc (data=0x55aa65bc1f20) at timer.c:140
#6  0x00007f719064958b in start_thread () from /lib64/libpthread.so.0
#7  0x00007f7190576703 in clone () from /lib64/libc.so.6
(gdb) 

2> trace log of glusterfs process
[2019-10-31 01:56:19.727981] T [socket.c:2993:socket_event_handler]
0-ccs-client-1: client (sock:7) in:1, out:0, err:0
[2019-10-31 01:56:19.727992] T [socket.c:3019:socket_event_handler]
0-ccs-client-1: Client socket (7) is already connected
[2019-10-31 01:56:19.728000] T [socket.c:574:__socket_ssl_readv]
0-ccs-client-1: ***** reading over non-SSL
[2019-10-31 01:56:19.728010] T [socket.c:574:__socket_ssl_readv]
0-ccs-client-1: ***** reading over non-SSL
[2019-10-31 01:56:19.728037] T [rpc-clnt.c:663:rpc_clnt_reply_init]
0-ccs-client-1: received rpc message (RPC XID: 0x1888d Program: GlusterFS 4.x
v1, ProgVers: 400, Proc: 13) from rpc-transport (ccs-client-1)
[2019-10-31 01:56:19.728058] T [MSGID: 0]
[client-rpc-fops_v2.c:686:client4_0_writev_cbk] 0-stack-trace: stack-address:
0x7f716c011d78, ccs-client-1 returned 1000
[2019-10-31 01:56:19.728091] T [MSGID: 0]
[afr-inode-write.c:230:afr_writev_unwind] 0-stack-trace: stack-address:
0x7f716c030bd8, ccs-replicate-0 returned 1000
[2019-10-31 01:56:19.728104] T [MSGID: 0]
[utime-autogen-fops.c:63:gf_utime_writev_cbk] 0-stack-trace: stack-address:
0x7f716c030bd8, ccs-utime returned 1000
[2019-10-31 01:56:19.728204] D [write-behind.c:754:__wb_fulfill_request] (-->
/lib64/libglusterfs.so.0(_gf_log_callingfn+0x12f)[0x7f719071cccf] (-->
/usr/lib64/glusterfs/7.0/xlator/performance/write-behind.so(+0x87cb)[0x7f718b4f17cb]
(-->
/usr/lib64/glusterfs/7.0/xlator/performance/write-behind.so(+0x8b50)[0x7f718b4f1b50]
(-->
/usr/lib64/glusterfs/7.0/xlator/performance/write-behind.so(+0xaca8)[0x7f718b4f3ca8]
(--> /usr/lib64/glusterfs/7.0/xlator/features/utime.so(+0x3b92)[0x7f718bf9bb92]
))))) 0-ccs-write-behind: (unique=88403, fop=WRITE,
gfid=dff7c9b9-460e-48a0-b814-90f1344e5383, gen=0): request fulfilled. removing
the request from liability queue? = yes
[2019-10-31 01:56:19.728338] D [write-behind.c:419:__wb_request_unref] (-->
/lib64/libglusterfs.so.0(_gf_log_callingfn+0x12f)[0x7f719071cccf] (-->
/usr/lib64/glusterfs/7.0/xlator/performance/write-behind.so(+0x367a)[0x7f718b4ec67a]
(-->
/usr/lib64/glusterfs/7.0/xlator/performance/write-behind.so(+0x8816)[0x7f718b4f1816]
(-->
/usr/lib64/glusterfs/7.0/xlator/performance/write-behind.so(+0x8b50)[0x7f718b4f1b50]
(-->
/usr/lib64/glusterfs/7.0/xlator/performance/write-behind.so(+0xaca8)[0x7f718b4f3ca8]
))))) 0-ccs-write-behind: (unique = 88403, fop=WRITE,
gfid=dff7c9b9-460e-48a0-b814-90f1344e5383, gen=0): destroying request, removing
from all queues
[2019-10-31 01:56:19.728445] D [write-behind.c:1765:wb_process_queue] (-->
/lib64/libglusterfs.so.0(_gf_log_callingfn+0x12f)[0x7f719071cccf] (-->
/usr/lib64/glusterfs/7.0/xlator/performance/write-behind.so(+0xa9cc)[0x7f718b4f39cc]
(-->
/usr/lib64/glusterfs/7.0/xlator/performance/write-behind.so(+0xab14)[0x7f718b4f3b14]
(--> /usr/lib64/glusterfs/7.0/xlator/features/utime.so(+0x3b92)[0x7f718bf9bb92]
(-->
/usr/lib64/glusterfs/7.0/xlator/cluster/replicate.so(+0x2741d)[0x7f718b5ea41d]
))))) 0-ccs-write-behind: processing queues
[2019-10-31 01:56:19.728528] D [MSGID: 0] [write-behind.c:1717:__wb_pick_winds]
0-ccs-write-behind: (unique=88405, fop=FLUSH,
gfid=dff7c9b9-460e-48a0-b814-90f1344e5383, gen=1): picking the request for
winding
[2019-10-31 01:56:19.728547] T [MSGID: 0] [write-behind.c:1997:wb_flush_helper]
0-stack-trace: stack-address: 0x7f717c0617b8, winding from ccs-write-behind to
ccs-utime
[2019-10-31 01:56:19.728560] T [MSGID: 0] [defaults.c:2533:default_flush]
0-stack-trace: stack-address: 0x7f717c0617b8, winding from ccs-utime to ccs-dht
[2019-10-31 01:56:19.728572] T [MSGID: 0] [defaults.c:2533:default_flush]
0-stack-trace: stack-address: 0x7f717c0617b8, winding from ccs-dht to
ccs-replicate-0
[2019-10-31 01:56:19.728604] T [MSGID: 0]
[afr-transaction.c:1959:afr_changelog_do] 0-stack-trace: stack-address:
0x7f716c011d78, winding from ccs-replicate-0 to ccs-client-0
[2019-10-31 01:56:19.728620] D [MSGID: 101016] [glusterfs3.h:781:dict_to_xdr]
0-dict: key 'trusted.afr.dirty' would not be sent on wire in the future
[Invalid argument]
[2019-10-31 01:56:19.728635] D [MSGID: 101016] [glusterfs3.h:781:dict_to_xdr]
0-dict: key 'trusted.afr.ccs-client-2' would not be sent on wire in the future
[Invalid argument]
[2019-10-31 01:56:19.728667] D [logging.c:1690:gf_log_flush_extra_msgs]
0-logging-infra: Log buffer size reduced. About to flush 5 extra log messages
[2019-10-31 01:56:19.728661] D [MSGID: 101016] [glusterfs3.h:781:dict_to_xdr]
0-dict: key 'trusted.afr.ccs-client-1' would not be sent on wire in the future
[Invalid argument]
[2019-10-31 01:56:19.728742] D [MSGID: 101016] [glusterfs3.h:781:dict_to_xdr]
0-dict: key 'trusted.afr.ccs-client-0' would not be sent on wire in the future
[Invalid argument]
[2019-10-31 01:56:19.728766] T [rpc-clnt.c:1451:rpc_clnt_record_build_header]
0-rpc-clnt: Request fraglen 336, payload: 252, rpc hdr: 84
[2019-10-31 01:56:19.728487] T [rpc-clnt.c:1738:rpc_clnt_submit] 0-rpc-clnt:
submitted request (unique: 88406, XID: 0x1891b, Program: GlusterFS 4.x v1,
ProgVers: 400, Proc: 23) to rpc-transport (ccs-client-0)
[2019-10-31 01:56:19.728788] T [MSGID: 0] [afr-dir-write.c:422:afr_create_wind]
0-stack-trace: stack-address: 0x7f717803afb8, winding from ccs-replicate-0 to
ccs-client-1
[2019-10-31 01:56:19.728807] T [rpc-clnt.c:1451:rpc_clnt_record_build_header]
0-rpc-clnt: Request fraglen 180, payload: 92, rpc hdr: 88
[2019-10-31 01:56:19.728821] D [logging.c:1693:gf_log_flush_extra_msgs]
0-logging-infra: Just flushed 5 extra log messages
pending frames:
frame : type(1) op(CREATE)
frame : type(1) op(CREATE)
frame : type(1) op(FLUSH)
frame : type(1) op(FLUSH)
frame : type(0) op(0)
frame : type(0) op(0)
patchset: git://git.gluster.org/glusterfs.git
signal received: 6
[2019-10-31 01:56:19.728833] T [rpc-clnt.c:1738:rpc_clnt_submit] 0-rpc-clnt:
submitted request (unique: 88406, XID: 0x1888e, Program: GlusterFS 4.x v1,
ProgVers: 400, Proc: 23) to rpc-transport (ccs-client-1)
[2019-10-31 01:56:19.729249] T [MSGID: 0] [afr-dir-write.c:422:afr_create_wind]
0-stack-trace: stack-address: 0x7f717803afb8, winding from ccs-replicate-0 to
ccs-client-2
[2019-10-31 01:56:19.729273] T [rpc-clnt.c:1451:rpc_clnt_record_build_header]
0-rpc-clnt: Request fraglen 180, payload: 92, rpc hdr: 88
[2019-10-31 01:56:19.729299] T [rpc-clnt.c:1738:rpc_clnt_submit] 0-rpc-clnt:
submitted request (unique: 88406, XID: 0x188cd, Program: GlusterFS 4.x v1,
ProgVers: 400, Proc: 23) to rpc-transport (ccs-client-2)
[2019-10-31 01:56:19.729315] T [socket.c:3037:socket_event_handler]
0-ccs-client-1: (sock:7) socket_event_poll_in returned 0
[2019-10-31 01:56:19.729326] T [socket.c:2993:socket_event_handler]
0-ccs-client-0: client (sock:10) in:1, out:0, err:0
[2019-10-31 01:56:19.729334] T [socket.c:3019:socket_event_handler]
0-ccs-client-0: Client socket (10) is already connected
[2019-10-31 01:56:19.729341] T [socket.c:574:__socket_ssl_readv]
0-ccs-client-0: ***** reading over non-SSL
[2019-10-31 01:56:19.729405] T [socket.c:574:__socket_ssl_readv]
0-ccs-client-0: ***** reading over non-SSL
[2019-10-31 01:56:19.728901] T [rpc-clnt.c:1738:rpc_clnt_submit] 0-rpc-clnt:
submitted request (unique: 88404, XID: 0x1891c, Program: GlusterFS 4.x v1,
ProgVers: 400, Proc: 34) to rpc-transport (ccs-client-0)
time of crash:
2019-10-31 01:56:19
configuration details:
argp 1
backtrace 1
dlfcn 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 7.0
[2019-10-31 01:56:19.729478] T [rpc-clnt.c:663:rpc_clnt_reply_init]
0-ccs-client-0: received rpc message (RPC XID: 0x1891b Program: GlusterFS 4.x
v1, ProgVers: 400, Proc: 23) from rpc-transport (ccs-client-0)
/lib64/libglusterfs.so.0(+0x27c80)[0x7f719071ac80]
/lib64/libglusterfs.so.0(gf_print_trace+0x323)[0x7f7190725483]
/lib64/libc.so.6(+0x38600)[0x7f71904b1600]
/lib64/libc.so.6(gsignal+0x10f)[0x7f71904b157f]
/lib64/libc.so.6(abort+0x127)[0x7f719049b895]
/lib64/libc.so.6(+0x7b9d7)[0x7f71904f49d7]
/lib64/libc.so.6(__libc_fatal+0x2a)[0x7f71904f4a9a]
/lib64/libpthread.so.0(pthread_cond_timedwait+0x1ad)[0x7f719064f98d]
/lib64/libglusterfs.so.0(+0x36911)[0x7f7190729911]
/lib64/libpthread.so.0(+0x858b)[0x7f719064958b]
/lib64/libc.so.6(clone+0x43)[0x7f7190576703]

-- 
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.


More information about the Bugs mailing list