[Gluster-Maintainers] Build failed in Jenkins: regression-test-with-multiplex #1477
jenkins at build.gluster.org
Wed Aug 28 18:00:10 UTC 2019
See <https://build.gluster.org/job/regression-test-with-multiplex/1477/display/redirect?page=changes>
Changes:
[Amar Tumballi] posix/ctime: Fix race during lookup ctime xattr heal
[Amar Tumballi] Multiple files: get trivial stuff done before lock
[Amar Tumballi] lcov: check for zerofill/discard fops on arbiter
[Amar Tumballi] gfapi: Fix deadlock while processing upcall
[Amar Tumballi] geo-rep: Fix mount broker setup issue
[Amar Tumballi] locks/fencing: Address hang while lock preemption
[atin] tests: introduce BRICK_MUX_BAD_TESTS variable
[Amar Tumballi] storage/posix: set the op_errno to proper errno during gfid set
[Amar Tumballi] xdr: add code so we have more xdr functions covered
[atin] multiple files: reduce minor work under RCU_READ_LOCK
[atin] tests/shd: Break down shd mux tests into multiple .t file
[Amar Tumballi] glusterd/shd: Return null proc if process is not running.
[Amar Tumballi] graph/shd: attach volfile even if ctx->active is NULL
[Kotresh H R] features/utime: always update ctime at setattr
[Nithya Balachandran] cluster/dht: Log hashes in hex
[Amar Tumballi] rpc/transport: have default listen-port
[Amar Tumballi] geo-rep: Fix Config Get Race
[Pranith Kumar K] cluster/ec: Update lock->good_mask on parent fop failure
[Amar Tumballi] build: stop suppressing "Entering/Leaving direcory..." messages
[Amar Tumballi] fuse: rate limit reading from fuse device upon receiving EPERM
[Ravishankar N] tests: fix bug-880898.t crash
[Krutika Dhananjay] features/shard: Send correct size when reads are sent beyond file size
[Amar Tumballi] gfapi: provide version for statedump path
[Amar Tumballi] cluster/ec: Fix coverity issue.
[Amar Tumballi] fuse: Set limit on invalidate queue size
[Amar Tumballi] glusterd: create separate logdirs for cluster.rc instances
[Amar Tumballi] posix: don't expect timer wheel to be inited
[Amar Tumballi] client-handshake.c: minor changes and removal of dead code.
[Pranith Kumar K] afr: restore timestamp of parent dir during entry-heal
[Amar Tumballi] mount.glusterfs: make fcache-keep-open option take a value
[Amar Tumballi] libglusterfs: remove dependency of rpc
[Mohit Agrawal] rpc: glusterd start is failed and throwing an error Address already in
[atin] tests: mark
[Nithya Balachandran] tests/dht: Add a test file for file renames
[Mohit Agrawal] glusterd: ./tests/bugs/glusterd/bug-1595320.t is failing
[Amar Tumballi] storage/posix - Moved pointed validity check in order to avoid possible
[Amar Tumballi] client_t.c: removal of dead code.
[Amar Tumballi] protocol/client - fixing a coverity issue
[Pranith Kumar K] posix: In brick_mux brick is crashed while start/stop volume in loop
[Amar Tumballi] geo-rep: Fix worker connection issue
[Amar Tumballi] mount/fuse - Fixing a coverity issue
[Amar Tumballi] logging: Structured logging reference PR
[Amar Tumballi] libglusterfs - fixing a coverity issue
[Amar Tumballi] features/locks: avoid use after freed of frame for blocked lock
[Amar Tumballi] performance/md-cache: Do not skip caching of null character xattr values
[Amar Tumballi] api: fixing a coverity issue
[Amar Tumballi] ctime: Fix ctime issue with utime family of syscalls
[Amar Tumballi] storage/posix - fixing a coverity issue
[Barak Sason] features/cloudsync - fix a coverity issue
[Amar Tumballi] gluster-smb:add smb parameter when access gluster by cifs
[Amar Tumballi] geo-rep: Structured logging new format
[Amar Tumballi] features/utime - fixing a coverity issue
[Amar Tumballi] storage/posix - Fixing a coverity issue
[Amar Tumballi] nlm: check if nlm4 is initialized in nlm_priv
[Amar Tumballi] ctime: Fix incorrect realtime passed to frame->root->ctime
[Aravinda VK] geo-rep: Fix the name of changelog archive file
[Mohit Agrawal] posix: log aio_error return codes in posix_fs_health_check
[Amar Tumballi] Revert "packaging: (ganesha) remove glusterfs-ganesha subpackage and
[Amar Tumballi] Revert "glusterd: (storhaug) remove ganesha (843e1b0)"
[Ravishankar N] cluster/afr - Unused variables
[atin] cli - group files to set volume options supports comments
[atin] glusterd: Add warning and abort in case of failures in migration during
[atin] glusterd: stop stale bricks during handshaking in brick mux mode
[Sanju Rakonde] glusterd: Unused value coverity fix
[Amar Tumballi] build: fix rpmlint warnings in specfile
[atin] glusterd: Fixed incorrect size argument
------------------------------------------
[...truncated 3.59 MB...]
tv = {tv_sec = 0, tv_usec = 960988}
base = 0x11753e0
#2 0x00007f183555ddd5 in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#3 0x00007f1834e2502d in clone () from /lib64/libc.so.6
No symbol table info available.
Thread 6 (Thread 0x7f1827d17700 (LWP 32070)):
#0 0x00007f1835561965 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
No symbol table info available.
#1 0x00007f182a8e668e in hooks_worker (args=0x117bd90) at <https://build.gluster.org/job/regression-test-with-multiplex/ws/xlators/mgmt/glusterd/src/glusterd-hooks.c>:527
conf = 0x11ed670
hooks_priv = 0x1233840
stub = 0x7f181c004040
#2 0x00007f183555ddd5 in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#3 0x00007f1834e2502d in clone () from /lib64/libc.so.6
No symbol table info available.
Thread 5 (Thread 0x7f1827516700 (LWP 32071)):
#0 0x00007f1834e25603 in epoll_wait () from /lib64/libc.so.6
No symbol table info available.
#1 0x00007f18367ae2a7 in event_dispatch_epoll_worker (data=0x120ac50) at <https://build.gluster.org/job/regression-test-with-multiplex/ws/libglusterfs/src/event-epoll.c>:745
event = {events = 1, data = {ptr = 0x10000000a, fd = 10, u32 = 10, u64 = 4294967306}}
ret = 0
ev_data = 0x120ac50
event_pool = 0x1168e50
myindex = 1
timetodie = 0
gen = 0
poller_death_notify = {next = 0x0, prev = 0x0}
slot = 0x0
tmp = 0x0
__FUNCTION__ = "event_dispatch_epoll_worker"
#2 0x00007f183555ddd5 in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#3 0x00007f1834e2502d in clone () from /lib64/libc.so.6
No symbol table info available.
Thread 4 (Thread 0x7f182ccfa700 (LWP 32041)):
#0 0x00007f1834debfad in nanosleep () from /lib64/libc.so.6
No symbol table info available.
#1 0x00007f1834debe44 in sleep () from /lib64/libc.so.6
No symbol table info available.
#2 0x00007f183676be5c in pool_sweeper (arg=0x0) at <https://build.gluster.org/job/regression-test-with-multiplex/ws/libglusterfs/src/mem-pool.c>:446
state = {death_row = {next = 0x0, prev = 0x0}, cold_lists = {0x0 <repeats 1024 times>}, n_cold_lists = 0}
pool_list = 0x0
next_pl = 0x0
pt_pool = 0x0
i = 0
poisoned = false
#3 0x00007f183555ddd5 in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#4 0x00007f1834e2502d in clone () from /lib64/libc.so.6
No symbol table info available.
Thread 3 (Thread 0x7f182d4fb700 (LWP 32040)):
#0 0x00007f1835565361 in sigwait () from /lib64/libpthread.so.0
No symbol table info available.
#1 0x000000000040b2a7 in ?? ()
No symbol table info available.
#2 0x0000000000000000 in ?? ()
No symbol table info available.
Thread 2 (Thread 0x7f182dcfc700 (LWP 32039)):
#0 0x00007f1835561d12 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
No symbol table info available.
#1 0x00007f18367440f8 in gf_timer_proc (data=0x11709c0) at <https://build.gluster.org/job/regression-test-with-multiplex/ws/libglusterfs/src/timer.c>:140
now = {tv_sec = 739894, tv_nsec = 612456954}
reg = 0x11709c0
event = 0x7f181c0052e0
tmp = 0x0
old_THIS = 0x7f1836a47a60 <global_xlator>
#2 0x00007f183555ddd5 in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#3 0x00007f1834e2502d in clone () from /lib64/libc.so.6
No symbol table info available.
Thread 1 (Thread 0x7f182c4f9700 (LWP 32042)):
#0 0x00007f182a83d807 in cds_list_add_tail (newp=0x7f1820019758, head=0x18) at /usr/include/urcu/list.h:66
No locals.
#1 0x00007f182a844bf5 in glusterd_brick_process_add_brick (brickinfo=0x7f18200196c0, parent_brickinfo=0x12271c0) at <https://build.gluster.org/job/regression-test-with-multiplex/ws/xlators/mgmt/glusterd/src/glusterd-utils.c>:2427
ret = 0
this = 0x117bd90
priv = 0x11ed670
brick_proc = 0x0
__FUNCTION__ = "glusterd_brick_process_add_brick"
#2 0x00007f182a850c89 in attach_brick (this=0x117bd90, brickinfo=0x7f18200196c0, other_brick=0x12271c0, volinfo=0x1212480, other_vol=0x1212480) at <https://build.gluster.org/job/regression-test-with-multiplex/ws/xlators/mgmt/glusterd/src/glusterd-utils.c>:5999
conf = 0x11ed670
pidfile1 = "/var/run/gluster/vols/patchy/builder202.int.aws.gluster.org-d-backends-patchy1.pid", '\000' <repeats 4013 times>
pidfile2 = "/var/run/gluster/vols/patchy/builder202.int.aws.gluster.org-d-backends-patchy3.pid", '\000' <repeats 4013 times>
unslashed = "d-backends-patchy3", '\000' <repeats 4077 times>
full_id = "patchy.builder202.int.aws.gluster.org.d-backends-patchy3", '\000' <repeats 4039 times>
path = "/var/lib/glusterd/vols/patchy/patchy.builder202.int.aws.gluster.org.d-backends-patchy3.vol", '\000' <repeats 4005 times>
ret = 0
tries = 15
rpc = 0x7f1820003650
len = 56
__FUNCTION__ = "attach_brick"
#3 0x00007f182a8528dd in glusterd_brick_start (volinfo=0x1212480, brickinfo=0x7f18200196c0, wait=true, only_connect=false) at <https://build.gluster.org/job/regression-test-with-multiplex/ws/xlators/mgmt/glusterd/src/glusterd-utils.c>:6573
ret = 0
this = 0x117bd90
other_brick = 0x12271c0
conf = 0x11ed670
pid = -1
pidfile = "/var/run/gluster/vols/patchy/builder202.int.aws.gluster.org-d-backends-patchy3.pid", '\000' <repeats 4013 times>
socketpath = '\000' <repeats 4095 times>
brickpath = 0x0
other_vol = 0x1212480
is_service_running = false
volid = "R\316=\232\205\223Cл\213\317g\341+\203", <incomplete sequence \367>
size = 16
__PRETTY_FUNCTION__ = "glusterd_brick_start"
__FUNCTION__ = "glusterd_brick_start"
#4 0x00007f182a8d8566 in glusterd_op_perform_add_bricks (volinfo=0x1212480, count=1, bricks=0x7f18200022b0 " builder202.int.aws.gluster.org:/d/backends/patchy3 ", dict=0x7f18200129d8) at <https://build.gluster.org/job/regression-test-with-multiplex/ws/xlators/mgmt/glusterd/src/glusterd-brick-ops.c>:1184
brick = 0x7f1820031f21 "builder202.int.aws.gluster.org:/d/backends/patchy3"
i = 1
brick_list = 0x7f1820031f20 " builder202.int.aws.gluster.org:/d/backends/patchy3"
free_ptr1 = 0x7f1820015520 " builder202.int.aws.gluster.org:/d/backends/patchy3"
free_ptr2 = 0x7f1820031f20 " builder202.int.aws.gluster.org:/d/backends/patchy3"
saveptr = 0x7f1820031f54 ""
ret = 0
stripe_count = 0
replica_count = 3
arbiter_count = 0
type = 0
brickinfo = 0x7f18200196c0
param = {rsp_dict = 0x0, volinfo = 0x0, node = 0x0}
restart_needed = false
brickid = 3
key = "brick1.mount_dir", '\000' <repeats 47 times>
brick_mount_dir = 0x7f1820012e90 "/backends/patchy3"
this = 0x117bd90
conf = 0x11ed670
is_valid_add_brick = true
brickstat = {f_bsize = 4096, f_frsize = 4096, f_blocks = 2618880, f_bfree = 2559155, f_bavail = 2559155, f_files = 5242880, f_ffree = 5240082, f_favail = 5240082, f_fsid = 1792, f_flag = 4096, f_namemax = 255, __f_spare = {0, 0, 0, 0, 0, 0}}
__PRETTY_FUNCTION__ = "glusterd_op_perform_add_bricks"
__FUNCTION__ = "glusterd_op_perform_add_bricks"
#5 0x00007f182a8daf95 in glusterd_op_add_brick (dict=0x7f18200129d8, op_errstr=0x7f1818206b30) at <https://build.gluster.org/job/regression-test-with-multiplex/ws/xlators/mgmt/glusterd/src/glusterd-brick-ops.c>:2059
ret = 0
volname = 0x7f1820017100 "patchy"
priv = 0x11ed670
volinfo = 0x1212480
this = 0x117bd90
bricks = 0x7f18200022b0 " builder202.int.aws.gluster.org:/d/backends/patchy3 "
count = 1
__PRETTY_FUNCTION__ = "glusterd_op_add_brick"
__FUNCTION__ = "glusterd_op_add_brick"
#6 0x00007f182a91378e in gd_mgmt_v3_commit_fn (op=GD_OP_ADD_BRICK, dict=0x7f18200129d8, op_errstr=0x7f1818206b30, op_errno=0x7f1818206b28, rsp_dict=0x7f182000f948) at <https://build.gluster.org/job/regression-test-with-multiplex/ws/xlators/mgmt/glusterd/src/glusterd-mgmt.c>:314
ret = -1
this = 0x117bd90
__PRETTY_FUNCTION__ = "gd_mgmt_v3_commit_fn"
__FUNCTION__ = "gd_mgmt_v3_commit_fn"
#7 0x00007f182a9172a0 in glusterd_mgmt_v3_commit (op=GD_OP_ADD_BRICK, op_ctx=0x7f1820016ad8, req_dict=0x7f18200129d8, op_errstr=0x7f1818206b30, op_errno=0x7f1818206b28, txn_generation=0) at <https://build.gluster.org/job/regression-test-with-multiplex/ws/xlators/mgmt/glusterd/src/glusterd-mgmt.c>:1576
ret = -1
peer_cnt = 0
rsp_dict = 0x7f182000f948
peerinfo = 0x0
args = {op_ret = 0, op_errno = 0, iatt1 = {ia_flags = 0, ia_ino = 0, ia_dev = 0, ia_rdev = 0, ia_size = 0, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_mtime = 0, ia_ctime = 0, ia_btime = 0, ia_atime_nsec = 0, ia_mtime_nsec = 0, ia_ctime_nsec = 0, ia_btime_nsec = 0, ia_attributes = 0, ia_attributes_mask = 0, ia_gfid = '\000' <repeats 15 times>, ia_type = IA_INVAL, ia_prot = {suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, group = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, other = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}}}, iatt2 = {ia_flags = 0, ia_ino = 0, ia_dev = 0, ia_rdev = 0, ia_size = 0, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_mtime = 0, ia_ctime = 0, ia_btime = 0, ia_atime_nsec = 0, ia_mtime_nsec = 0, ia_ctime_nsec = 0, ia_btime_nsec = 0, ia_attributes = 0, ia_attributes_mask = 0, ia_gfid = '\000' <repeats 15 times>, ia_type = IA_INVAL, ia_prot = {suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, group = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, other = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}}}, iatt3 = {ia_flags = 0, ia_ino = 0, ia_dev = 0, ia_rdev = 0, ia_size = 0, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_mtime = 0, ia_ctime = 0, ia_btime = 0, ia_atime_nsec = 0, ia_mtime_nsec = 0, ia_ctime_nsec = 0, ia_btime_nsec = 0, ia_attributes = 0, ia_attributes_mask = 0, ia_gfid = '\000' <repeats 15 times>, ia_type = IA_INVAL, ia_prot = {suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, group = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, other = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}}}, xattr = 0x0, statvfs_buf = {f_bsize = 0, f_frsize = 0, f_blocks = 0, f_bfree = 0, f_bavail = 0, f_files = 0, f_ffree = 0, f_favail = 0, f_fsid = 0, f_flag = 0, f_namemax = 0, __f_spare = {0, 0, 0, 0, 0, 0}}, vector = 0x0, count = 0, iobref = 0x0, buffer = 0x0, xdata = 0x0, flock = {l_type = 0, l_whence = 0, l_start = 0, l_len = 0, l_pid = 0, l_owner = {len = 0, data = '\000' <repeats 1023 times>}}, lease = {cmd = 0, lease_type = NONE, lease_id = '\000' <repeats 15 times>, lease_flags = 0}, dict_out = 0x0, uuid = '\000' <repeats 15 times>, errstr = 0x0, dict = 0x0, lock_dict = {__data = {__lock = 0, __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins = 0, __elision = 0, __list = {__prev = 0x0, __next = 0x0}}, __size = '\000' <repeats 39 times>, __align = 0}, barrier = {initialized = false, guard = {__data = {__lock = 0, __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins = 0, __elision = 0, __list = {__prev = 0x0, __next = 0x0}}, __size = '\000' <repeats 39 times>, __align = 0}, cond = {__data = {__lock = 0, __futex = 0, __total_seq = 0, __wakeup_seq = 0, __woken_seq = 0, __mutex = 0x0, __nwaiters = 0, __broadcast_seq = 0}, __size = '\000' <repeats 47 times>, __align = 0}, waitq = {next = 0x0, prev = 0x0}, count = 0, waitfor = 0}, task = 0x0, mutex = {__data = {__lock = 0, __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins = 0, __elision = 0, __list = {__prev = 0x0, __next = 0x0}}, __size = '\000' <repeats 39 times>, __align = 0}, cond = {__data = {__lock = 0, __futex = 0, __total_seq = 0, __wakeup_seq = 0, __woken_seq = 0, __mutex = 0x0, __nwaiters = 0, __broadcast_seq = 0}, __size = '\000' 
<repeats 47 times>, __align = 0}, done = 0, entries = {{list = {next = 0x0, prev = 0x0}, {next = 0x0, prev = 0x0}}, d_ino = 0, d_off = 0, d_len = 0, d_type = 0, d_stat = {ia_flags = 0, ia_ino = 0, ia_dev = 0, ia_rdev = 0, ia_size = 0, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_mtime = 0, ia_ctime = 0, ia_btime = 0, ia_atime_nsec = 0, ia_mtime_nsec = 0, ia_ctime_nsec = 0, ia_btime_nsec = 0, ia_attributes = 0, ia_attributes_mask = 0, ia_gfid = '\000' <repeats 15 times>, ia_type = IA_INVAL, ia_prot = {suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, group = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, other = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}}}, dict = 0x0, inode = 0x0, d_name = 0x7f1818206650 ""}, offset = 0, locklist = {list = {next = 0x0, prev = 0x0}, flock = {l_type = 0, l_whence = 0, l_start = 0, l_len = 0, l_pid = 0, l_owner = {len = 0, data = '\000' <repeats 1023 times>}}, client_uid = 0x0, lk_flags = 0}}
peer_uuid = '\000' <repeats 15 times>
this = 0x117bd90
conf = 0x11ed670
__PRETTY_FUNCTION__ = "glusterd_mgmt_v3_commit"
__FUNCTION__ = "glusterd_mgmt_v3_commit"
#8 0x00007f182a919675 in glusterd_mgmt_v3_initiate_all_phases (req=0x7f1818001b68, op=GD_OP_ADD_BRICK, dict=0x7f1820016ad8) at <https://build.gluster.org/job/regression-test-with-multiplex/ws/xlators/mgmt/glusterd/src/glusterd-mgmt.c>:2332
ret = 0
op_ret = -1
req_dict = 0x7f18200129d8
tmp_dict = 0x7f1820014a48
conf = 0x11ed670
op_errstr = 0x7f18200149e0 "/d/backends/patchy3 is already part of a volume"
this = 0x117bd90
is_acquired = true
originator_uuid = 0x7f1820014850
txn_generation = 0
op_errno = 0
__PRETTY_FUNCTION__ = "glusterd_mgmt_v3_initiate_all_phases"
__FUNCTION__ = "glusterd_mgmt_v3_initiate_all_phases"
#9 0x00007f182a8d5f3b in __glusterd_handle_add_brick (req=0x7f1818001b68) at <https://build.gluster.org/job/regression-test-with-multiplex/ws/xlators/mgmt/glusterd/src/glusterd-brick-ops.c>:443
ret = 0
cli_req = {dict = {dict_len = 252, dict_val = 0x7f1820014530 ""}}
dict = 0x7f1820016ad8
bricks = 0x7f18200022b0 " builder202.int.aws.gluster.org:/d/backends/patchy3 "
volname = 0x7f1820017100 "patchy"
brick_count = 1
cli_rsp = 0x0
err_str = '\000' <repeats 2047 times>
rsp = {op_ret = 0, op_errno = 0, op_errstr = 0x0, dict = {dict_len = 0, dict_val = 0x0}}
volinfo = 0x1212480
this = 0x117bd90
total_bricks = 3
replica_count = 3
arbiter_count = 0
stripe_count = 0
type = 0
conf = 0x11ed670
__PRETTY_FUNCTION__ = "__glusterd_handle_add_brick"
__FUNCTION__ = "__glusterd_handle_add_brick"
#10 0x00007f182a80d075 in glusterd_big_locked_handler (req=0x7f1818001b68, actor_fn=0x7f182a8d537c <__glusterd_handle_add_brick>) at <https://build.gluster.org/job/regression-test-with-multiplex/ws/xlators/mgmt/glusterd/src/glusterd-handler.c>:79
priv = 0x11ed670
ret = -1
#11 0x00007f182a8d601d in glusterd_handle_add_brick (req=0x7f1818001b68) at <https://build.gluster.org/job/regression-test-with-multiplex/ws/xlators/mgmt/glusterd/src/glusterd-brick-ops.c>:467
No locals.
#12 0x00007f183678444b in synctask_wrap () at <https://build.gluster.org/job/regression-test-with-multiplex/ws/libglusterfs/src/syncop.c>:272
task = 0x7f1818005210
#13 0x00007f1834d6f0d0 in ?? () from /lib64/libc.so.6
No symbol table info available.
#14 0x0000000000000000 in ?? ()
No symbol table info available.
=========================================================
Finish backtrace
program name : /build/install/sbin/glusterd
corefile : /glfs_sproc0-32038.core
=========================================================
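Note on the trace: frame #1 shows brick_proc = 0x0 while frame #0 crashes inside cds_list_add_tail() with head=0x18, which is consistent with taking the address of a cds_list_head member inside a NULL brick-process struct before the list insert at glusterd-utils.c:2427. The snippet below is only a minimal, self-contained sketch of that pattern; struct fake_brick_proc, its field names, and the 0x18 offset are illustrative assumptions, not the actual glusterd_brickprocess_t definition.

    /* Minimal sketch (not glusterd code): how a NULL struct pointer turns
     * into a small bogus list head such as 0x18 once a member address is
     * taken and handed to cds_list_add_tail(). */
    #include <stddef.h>
    #include <stdio.h>
    #include <urcu/list.h>               /* struct cds_list_head, cds_list_add_tail() */

    struct fake_brick_proc {             /* hypothetical layout, chosen so that      */
        int port;
        void *subvol;
        long brick_count;
        struct cds_list_head bricks;     /* ...this member lands at offset 0x18      */
    };

    int main(void)
    {
        struct fake_brick_proc *proc = NULL;   /* lookup failed and was never checked */
        struct cds_list_head entry;
        CDS_INIT_LIST_HEAD(&entry);

        /* &proc->bricks evaluates to NULL + offsetof(...) == (void *)0x18 on LP64;
         * the commented-out insert below would dereference it and SIGSEGV, matching
         * frame #0 (cds_list_add_tail, head=0x18) in the core above. */
        printf("head would be %p (offset 0x%zx)\n",
               (void *)&proc->bricks, offsetof(struct fake_brick_proc, bricks));

        /* cds_list_add_tail(&entry, &proc->bricks);   <- would crash here */
        return 0;
    }

Under that reading, the crash would be avoided by checking the brick-process pointer before the list insert, but the actual sources should be confirmed before drawing conclusions from this sketch.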
+ rm -f /build/install/cores/gdbout.txt
+ sort /build/install/cores/liblist.txt
+ uniq
+ cat /build/install/cores/liblist.txt.tmp
+ grep -v /build/install
+ tar -cf /archives/archived_builds/build-install-regression-test-with-multiplex-1477.tar /build/install/sbin /build/install/bin /build/install/lib /build/install/libexec /build/install/cores
tar: Removing leading `/' from member names
+ tar -rhf /archives/archived_builds/build-install-regression-test-with-multiplex-1477.tar -T /build/install/cores/liblist.txt
tar: Removing leading `/' from member names
+ bzip2 /archives/archived_builds/build-install-regression-test-with-multiplex-1477.tar
+ rm -f /build/install/cores/liblist.txt
+ rm -f /build/install/cores/liblist.txt.tmp
+ find /archives -size +1G -delete -type f
+ [[ builder202.int.aws.gluster.org == *\a\w\s* ]]
+ scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i **** /archives/archived_builds/build-install-regression-test-with-multiplex-1477.tar.bz2 _logs-collector at logs.aws.gluster.org:/var/www/glusterfs-logs/regression-test-with-multiplex-1477.bz2
Warning: Permanently added 'logs.aws.gluster.org,18.219.45.211' (ECDSA) to the list of known hosts.
+ echo 'Cores and builds archived in https://logs.aws.gluster.org/regression-test-with-multiplex-1477.bz2'
Cores and builds archived in https://logs.aws.gluster.org/regression-test-with-multiplex-1477.bz2
+ echo 'Open core using the following command to get a proper stack'
Open core using the following command to get a proper stack
+ echo 'Example: From root of extracted tarball'
Example: From root of extracted tarball
+ echo '\t\tgdb -ex '\''set sysroot ./'\'' -ex '\''core-file ./build/install/cores/xxx.core'\'' <target, say ./build/install/sbin/glusterd>'
\t\tgdb -ex 'set sysroot ./' -ex 'core-file ./build/install/cores/xxx.core' <target, say ./build/install/sbin/glusterd>
+ RET=1
+ '[' 1 -ne 0 ']'
+ tar -czf <https://build.gluster.org/job/regression-test-with-multiplex/1477/artifact/glusterfs-logs.tgz> /var/log/glusterfs /var/log/messages
tar: Removing leading `/' from member names
+ case $(uname -s) in
++ uname -s
+ /sbin/sysctl -w kernel.core_pattern=/%e-%p.core
kernel.core_pattern = /%e-%p.core
+ exit 1
Build step 'Execute shell' marked build as failure
More information about the maintainers mailing list