[Gluster-Maintainers] Build failed in Jenkins: regression-test-with-multiplex #1569
jenkins at build.gluster.org
Thu Nov 28 19:02:34 UTC 2019
See <https://build.gluster.org/job/regression-test-with-multiplex/1569/display/redirect?page=changes>
Changes:
[Ravishankar N] afr: make heal info lockless
------------------------------------------
[...truncated 3.76 MB...]
myindex = 1
timetodie = 0
gen = 0
poller_death_notify = {next = 0x0, prev = 0x0}
slot = 0x0
tmp = 0x0
__FUNCTION__ = "event_dispatch_epoll_worker"
#2 0x00007f933f4bde65 in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#3 0x00007f933ed8388d in clone () from /lib64/libc.so.6
No symbol table info available.
Thread 8 (Thread 0x7f933442a700 (LWP 25249)):
#0 0x00007f933f4c1da2 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
No symbol table info available.
#1 0x00007f93414de535 in gf_timer_proc (data=0xa5e9a0) at <https://build.gluster.org/job/regression-test-with-multiplex/ws/libglusterfs/src/timer.c>:140
now = {tv_sec = 2468282, tv_nsec = 406152304}
reg = 0xa5e9a0
event = 0xa654a0
tmp = 0x0
old_THIS = 0x0
#2 0x00007f933f4bde65 in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#3 0x00007f933ed8388d in clone () from /lib64/libc.so.6
No symbol table info available.
Thread 7 (Thread 0x7f933b956700 (LWP 25247)):
#0 0x00007f933f4c1da2 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
No symbol table info available.
#1 0x00007f934151ed31 in syncenv_task (proc=0xa58ea0) at <https://build.gluster.org/job/regression-test-with-multiplex/ws/libglusterfs/src/syncop.c>:517
env = 0xa58ea0
task = 0x0
sleep_till = {tv_sec = 1574959310, tv_nsec = 0}
ret = 0
#2 0x00007f934151ef26 in syncenv_processor (thdata=0xa58ea0) at <https://build.gluster.org/job/regression-test-with-multiplex/ws/libglusterfs/src/syncop.c>:584
env = 0xa58ea0
proc = 0xa58ea0
task = 0x7f93200079c0
#3 0x00007f933f4bde65 in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#4 0x00007f933ed8388d in clone () from /lib64/libc.so.6
No symbol table info available.
Thread 6 (Thread 0x7f9333a26700 (LWP 25250)):
#0 0x00007f933f4befd7 in pthread_join () from /lib64/libpthread.so.0
No symbol table info available.
#1 0x00007f9341548c4d in event_dispatch_epoll (event_pool=0xa53d80) at <https://build.gluster.org/job/regression-test-with-multiplex/ws/libglusterfs/src/event-epoll.c>:848
i = 2
t_id = 140270160488192
pollercount = 2
ret = 0
ev_data = 0x7f932c000c00
__FUNCTION__ = "event_dispatch_epoll"
#2 0x00007f9341504ca6 in gf_event_dispatch (event_pool=0xa53d80) at <https://build.gluster.org/job/regression-test-with-multiplex/ws/libglusterfs/src/event.c>:115
ret = -1
__FUNCTION__ = "gf_event_dispatch"
#3 0x00007f934077824c in glfs_poller (data=0x9dc270) at <https://build.gluster.org/job/regression-test-with-multiplex/ws/api/src/glfs.c>:730
fs = 0x9dc270
#4 0x00007f933f4bde65 in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#5 0x00007f933ed8388d in clone () from /lib64/libc.so.6
No symbol table info available.
Thread 5 (Thread 0x7f933b155700 (LWP 25248)):
#0 0x00007f933f4c1da2 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
No symbol table info available.
#1 0x00007f934151ed31 in syncenv_task (proc=0xa59260) at <https://build.gluster.org/job/regression-test-with-multiplex/ws/libglusterfs/src/syncop.c>:517
env = 0xa58ea0
task = 0x0
sleep_till = {tv_sec = 1574959310, tv_nsec = 0}
ret = 0
#2 0x00007f934151ef26 in syncenv_processor (thdata=0xa59260) at <https://build.gluster.org/job/regression-test-with-multiplex/ws/libglusterfs/src/syncop.c>:584
env = 0xa58ea0
proc = 0xa59260
task = 0x0
#3 0x00007f933f4bde65 in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#4 0x00007f933ed8388d in clone () from /lib64/libc.so.6
No symbol table info available.
Thread 4 (Thread 0x7f933cd57700 (LWP 25246)):
#0 0x00007f933f4c1da2 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
No symbol table info available.
#1 0x00007f934151ed31 in syncenv_task (proc=0xa19f10) at <https://build.gluster.org/job/regression-test-with-multiplex/ws/libglusterfs/src/syncop.c>:517
env = 0xa19b50
task = 0x0
sleep_till = {tv_sec = 1574959309, tv_nsec = 0}
ret = 0
#2 0x00007f934151ef26 in syncenv_processor (thdata=0xa19f10) at <https://build.gluster.org/job/regression-test-with-multiplex/ws/libglusterfs/src/syncop.c>:584
env = 0xa19b50
proc = 0xa19f10
task = 0x0
#3 0x00007f933f4bde65 in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#4 0x00007f933ed8388d in clone () from /lib64/libc.so.6
No symbol table info available.
Thread 3 (Thread 0x7f933d558700 (LWP 25245)):
#0 0x00007f933f4c1da2 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
No symbol table info available.
#1 0x00007f934151ed31 in syncenv_task (proc=0xa19b50) at <https://build.gluster.org/job/regression-test-with-multiplex/ws/libglusterfs/src/syncop.c>:517
env = 0xa19b50
task = 0x0
sleep_till = {tv_sec = 1574959309, tv_nsec = 0}
ret = 0
#2 0x00007f934151ef26 in syncenv_processor (thdata=0xa19b50) at <https://build.gluster.org/job/regression-test-with-multiplex/ws/libglusterfs/src/syncop.c>:584
env = 0xa19b50
proc = 0xa19b50
task = 0x0
#3 0x00007f933f4bde65 in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#4 0x00007f933ed8388d in clone () from /lib64/libc.so.6
No symbol table info available.
Thread 2 (Thread 0x7f933e859700 (LWP 25244)):
#0 0x00007f933ed4a80d in nanosleep () from /lib64/libc.so.6
No symbol table info available.
#1 0x00007f933ed4a6a4 in sleep () from /lib64/libc.so.6
No symbol table info available.
#2 0x00007f9341506299 in pool_sweeper (arg=0x0) at <https://build.gluster.org/job/regression-test-with-multiplex/ws/libglusterfs/src/mem-pool.c>:446
state = {death_row = {next = 0x0, prev = 0x0}, cold_lists = {0x0 <repeats 1024 times>}, n_cold_lists = 0}
pool_list = 0x0
next_pl = 0x0
pt_pool = 0x0
i = 0
poisoned = false
#3 0x00007f933f4bde65 in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#4 0x00007f933ed8388d in clone () from /lib64/libc.so.6
No symbol table info available.
Thread 1 (Thread 0x7f93419ef4c0 (LWP 25243)):
#0 0x00007f933ecbb337 in raise () from /lib64/libc.so.6
No symbol table info available.
#1 0x00007f933ecbca28 in abort () from /lib64/libc.so.6
No symbol table info available.
#2 0x00007f933ecb4156 in __assert_fail_base () from /lib64/libc.so.6
No symbol table info available.
#3 0x00007f933ecb4202 in __assert_fail () from /lib64/libc.so.6
No symbol table info available.
#4 0x00007f93304a3213 in afr_update_heal_status (this=0x7f932400d6c0, replies=0x7ffef71c1f50, index_vgfid=0x407043 "glusterfs.xattrop_dirty_gfid", esh=0x7ffef71c273d, dsh=0x7ffef71c273f, msh=0x7ffef71c273e) at <https://build.gluster.org/job/regression-test-with-multiplex/ws/xlators/cluster/afr/src/afr-common.c>:6961
ret = 0
i = 2
io_domain_lk_count = 1
shd_domain_lk_count = 1
priv = 0x7f9324042980
key1 = 0x7ffef71c1dd0 "glusterfs.inodelk-dom-prefix:patchy-replicate-0"
key2 = 0x7ffef71c1d80 "glusterfs.inodelk-dom-prefix:patchy-replicate-0:self-heal"
__PRETTY_FUNCTION__ = "afr_update_heal_status"
#5 0x00007f93304a370e in afr_lockless_inspect (frame=0xaa40b8, this=0x7f932400d6c0, gfid=0xaa54e8 "\036nV\036~\223B\f\277\244\031\020\024\231\370:", inode=0x7ffef71c2830, index_vgfid=0x407043 "glusterfs.xattrop_dirty_gfid", entry_selfheal=0x7ffef71c283d, data_selfheal=0x7ffef71c283f, metadata_selfheal=0x7ffef71c283e, pending=0x7ffef71c283c "\001") at <https://build.gluster.org/job/regression-test-with-multiplex/ws/xlators/cluster/afr/src/afr-common.c>:7032
ret = 0
i = 2
priv = 0x7f9324042980
replies = 0x7ffef71c1f50
dsh = false
msh = true
esh = false
sources = 0x7ffef71c1f30 "\001"
sinks = 0x7ffef71c1f10 ""
valid_on = 0x7ffef71c1ed0 "\001\001\034\367\376\177"
witness = 0x7ffef71c1ef0
#6 0x00007f93304a38c0 in afr_get_heal_info (frame=0xaab338, this=0x7f932400d6c0, loc=0xaa54c8) at <https://build.gluster.org/job/regression-test-with-multiplex/ws/xlators/cluster/afr/src/afr-common.c>:7075
data_selfheal = false
metadata_selfheal = false
entry_selfheal = false
pending = 1 '\001'
dict = 0x0
ret = -1
op_errno = 12
inode = 0xaa5388
substr = 0x0
status = 0x0
heal_frame = 0xaa40b8
heal_local = 0xaad598
local = 0xaa5498
index_vgfid = 0x407043 "glusterfs.xattrop_dirty_gfid"
__FUNCTION__ = "afr_get_heal_info"
#7 0x00007f933042a464 in afr_handle_heal_xattrs (frame=0xaab338, this=0x7f932400d6c0, loc=0xaa54c8, heal_op=0x406fe2 "glusterfs.heal-info") at <https://build.gluster.org/job/regression-test-with-multiplex/ws/xlators/cluster/afr/src/afr-inode-read.c>:1486
ret = -1
data = 0x0
__FUNCTION__ = "afr_handle_heal_xattrs"
#8 0x00007f933042acd9 in afr_getxattr (frame=0xaab338, this=0x7f932400d6c0, loc=0x7ffef71c3870, name=0x406fe2 "glusterfs.heal-info", xdata=0xaa41c8) at <https://build.gluster.org/job/regression-test-with-multiplex/ws/xlators/cluster/afr/src/afr-inode-read.c>:1576
priv = 0x7f9324042980
local = 0xaa5498
children = 0x7f9324042d10
i = 0
op_errno = 0
ret = -1
cbk = 0x0
__FUNCTION__ = "afr_getxattr"
#9 0x00007f9341524be8 in syncop_getxattr (subvol=0x7f932400d6c0, loc=0x7ffef71c3870, dict=0x7ffef71c38b8, key=0x406fe2 "glusterfs.heal-info", xdata_in=0xaa41c8, xdata_out=0x0) at <https://build.gluster.org/job/regression-test-with-multiplex/ws/libglusterfs/src/syncop.c>:1574
_new = 0xaab338
old_THIS = 0x7f932400d6c0
next_xl_fn = 0x7f933042aa69 <afr_getxattr>
tmp_cbk = 0x7f9341524005 <syncop_getxattr_cbk>
task = 0x0
frame = 0xaa5068
args = {op_ret = 0, op_errno = 0, iatt1 = {ia_flags = 0, ia_ino = 0, ia_dev = 0, ia_rdev = 0, ia_size = 0, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_mtime = 0, ia_ctime = 0, ia_btime = 0, ia_atime_nsec = 0, ia_mtime_nsec = 0, ia_ctime_nsec = 0, ia_btime_nsec = 0, ia_attributes = 0, ia_attributes_mask = 0, ia_gfid = '\000' <repeats 15 times>, ia_type = IA_INVAL, ia_prot = {suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, group = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, other = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}}}, iatt2 = {ia_flags = 0, ia_ino = 0, ia_dev = 0, ia_rdev = 0, ia_size = 0, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_mtime = 0, ia_ctime = 0, ia_btime = 0, ia_atime_nsec = 0, ia_mtime_nsec = 0, ia_ctime_nsec = 0, ia_btime_nsec = 0, ia_attributes = 0, ia_attributes_mask = 0, ia_gfid = '\000' <repeats 15 times>, ia_type = IA_INVAL, ia_prot = {suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, group = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, other = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}}}, iatt3 = {ia_flags = 0, ia_ino = 0, ia_dev = 0, ia_rdev = 0, ia_size = 0, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_mtime = 0, ia_ctime = 0, ia_btime = 0, ia_atime_nsec = 0, ia_mtime_nsec = 0, ia_ctime_nsec = 0, ia_btime_nsec = 0, ia_attributes = 0, ia_attributes_mask = 0, ia_gfid = '\000' <repeats 15 times>, ia_type = IA_INVAL, ia_prot = {suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, group = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, other = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}}}, xattr = 0x0, statvfs_buf = {f_bsize = 0, f_frsize = 0, f_blocks = 0, f_bfree = 0, f_bavail = 0, f_files = 0, f_ffree = 0, f_favail = 0, f_fsid = 0, f_flag = 0, f_namemax = 0, __f_spare = {0, 0, 0, 0, 0, 0}}, vector = 0x0, count = 0, iobref = 0x0, buffer = 0x0, xdata = 0x0, flock = {l_type = 0, l_whence = 0, l_start = 0, l_len = 0, l_pid = 0, l_owner = {len = 0, data = '\000' <repeats 1023 times>}}, lease = {cmd = 0, lease_type = NONE, lease_id = '\000' <repeats 15 times>, lease_flags = 0}, dict_out = 0x0, uuid = '\000' <repeats 15 times>, errstr = 0x0, dict = 0x0, lock_dict = {__data = {__lock = 0, __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins = 0, __elision = 0, __list = {__prev = 0x0, __next = 0x0}}, __size = '\000' <repeats 39 times>, __align = 0}, barrier = {initialized = false, guard = {__data = {__lock = 0, __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins = 0, __elision = 0, __list = {__prev = 0x0, __next = 0x0}}, __size = '\000' <repeats 39 times>, __align = 0}, cond = {__data = {__lock = 0, __futex = 0, __total_seq = 0, __wakeup_seq = 0, __woken_seq = 0, __mutex = 0x0, __nwaiters = 0, __broadcast_seq = 0}, __size = '\000' <repeats 47 times>, __align = 0}, waitq = {next = 0x0, prev = 0x0}, count = 0, waitfor = 0}, task = 0x0, mutex = {__data = {__lock = 0, __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins = 0, __elision = 0, __list = {__prev = 0x0, __next = 0x0}}, __size = '\000' <repeats 39 times>, __align = 0}, cond = {__data = {__lock = 0, __futex = 0, __total_seq = 0, __wakeup_seq = 0, __woken_seq = 0, __mutex = 0x0, __nwaiters = 0, __broadcast_seq = 0}, __size = '\000' 
<repeats 47 times>, __align = 0}, done = 0, entries = {{list = {next = 0x0, prev = 0x0}, {next = 0x0, prev = 0x0}}, d_ino = 0, d_off = 0, d_len = 0, d_type = 0, d_stat = {ia_flags = 0, ia_ino = 0, ia_dev = 0, ia_rdev = 0, ia_size = 0, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_mtime = 0, ia_ctime = 0, ia_btime = 0, ia_atime_nsec = 0, ia_mtime_nsec = 0, ia_ctime_nsec = 0, ia_btime_nsec = 0, ia_attributes = 0, ia_attributes_mask = 0, ia_gfid = '\000' <repeats 15 times>, ia_type = IA_INVAL, ia_prot = {suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, group = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, other = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}}}, dict = 0x0, inode = 0x0, d_name = 0x7ffef71c33a0 ""}, offset = 0, locklist = {list = {next = 0x0, prev = 0x0}, flock = {l_type = 0, l_whence = 0, l_start = 0, l_len = 0, l_pid = 0, l_owner = {len = 0, data = '\000' <repeats 1023 times>}}, client_uid = 0x0, lk_flags = 0}}
__FUNCTION__ = "syncop_getxattr"
#10 0x00000000004045c6 in ?? ()
No symbol table info available.
#11 0x0000000000000000 in ?? ()
No symbol table info available.
=========================================================
Finish backtrace
program name : /build/install/sbin/glfsheal
corefile : /glfsheal-25243.core
=========================================================
+ rm -f /build/install/cores/gdbout.txt
+ for corefile in '$CORELIST'
+ getliblistfromcore /build/install/cores/glfsheal-25243.core
+ rm -f /build/install/cores/gdbout.txt
+ gdb -c /build/install/cores/glfsheal-25243.core -q -ex 'set pagination off' -ex 'info sharedlibrary' -ex q
+ set +x
+ rm -f /build/install/cores/gdbout.txt
+ sort /build/install/cores/liblist.txt
+ uniq
+ cat /build/install/cores/liblist.txt.tmp
+ grep -v /build/install
+ tar -cf /archives/archived_builds/build-install-regression-test-with-multiplex-1569.tar /build/install/sbin /build/install/bin /build/install/lib /build/install/libexec /build/install/cores
tar: Removing leading `/' from member names
+ tar -rhf /archives/archived_builds/build-install-regression-test-with-multiplex-1569.tar -T /build/install/cores/liblist.txt
tar: Removing leading `/' from member names
+ bzip2 /archives/archived_builds/build-install-regression-test-with-multiplex-1569.tar
+ rm -f /build/install/cores/liblist.txt
+ rm -f /build/install/cores/liblist.txt.tmp
+ find /archives -size +1G -delete -type f
+ [[ builder210.int.aws.gluster.org == *\a\w\s* ]]
+ scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i **** /archives/archived_builds/build-install-regression-test-with-multiplex-1569.tar.bz2 _logs-collector at logs.aws.gluster.org:/var/www/glusterfs-logs/regression-test-with-multiplex-1569.bz2
Warning: Permanently added 'logs.aws.gluster.org,18.219.45.211' (ECDSA) to the list of known hosts.
+ echo 'Cores and builds archived in https://logs.aws.gluster.org/regression-test-with-multiplex-1569.bz2'
Cores and builds archived in https://logs.aws.gluster.org/regression-test-with-multiplex-1569.bz2
+ echo 'Open core using the following command to get a proper stack'
Open core using the following command to get a proper stack
+ echo 'Example: From root of extracted tarball'
Example: From root of extracted tarball
+ echo '\t\tgdb -ex '\''set sysroot ./'\'' -ex '\''core-file ./build/install/cores/xxx.core'\'' <target, say ./build/install/sbin/glusterd>'
\t\tgdb -ex 'set sysroot ./' -ex 'core-file ./build/install/cores/xxx.core' <target, say ./build/install/sbin/glusterd>
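For this particular run, a minimal sketch of the full sequence (assuming the uploaded .bz2 is the bzip2-compressed tar created above, keeps the layout shown in the archive step, and that wget, tar and gdb are available locally) would be:

    # Fetch and unpack the archived build and cores (URL and filename from the log above).
    wget https://logs.aws.gluster.org/regression-test-with-multiplex-1569.bz2
    tar -xjf regression-test-with-multiplex-1569.bz2

    # Load the glfsheal core named earlier against the matching binary,
    # using the extracted tree as the sysroot so the bundled shared libraries resolve.
    gdb -ex 'set sysroot ./' \
        -ex 'core-file ./build/install/cores/glfsheal-25243.core' \
        ./build/install/sbin/glfsheal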
+ RET=1
+ '[' 1 -ne 0 ']'
+ tar -czf <https://build.gluster.org/job/regression-test-with-multiplex/1569/artifact/glusterfs-logs.tgz> /var/log/glusterfs /var/log/messages /var/log/messages-20191103 /var/log/messages-20191110 /var/log/messages-20191117 /var/log/messages-20191124
tar: Removing leading `/' from member names
+ case $(uname -s) in
++ uname -s
+ /sbin/sysctl -w kernel.core_pattern=/%e-%p.core
kernel.core_pattern = /%e-%p.core
+ exit 1
Build step 'Execute shell' marked build as failure
More information about the maintainers mailing list