[Gluster-Maintainers] Build failed in Jenkins: regression-test-with-multiplex #786

Mohit Agrawal moagrawa at redhat.com
Sun Jul 1 04:45:03 UTC 2018


I have posted a patch (https://review.gluster.org/#/c/20427/) to resolve
this crash.

Regards
Mohit Agrawal

On Sat, Jun 30, 2018 at 10:05 AM, Atin Mukherjee <amukherj at redhat.com>
wrote:

> +Mohit
>
> Is this a new crash? I’ve not seen multiplex regressions dumping core in
> recent times.
>
> On Sat, 30 Jun 2018 at 00:25, <jenkins at build.gluster.org> wrote:
>
>> See <https://build.gluster.org/job/regression-test-with-multiplex/786/display/redirect?page=changes>
>>
>> Changes:
>>
>> [Varsha Rao] xlators/features/barrier: Fix RESOURCE_LEAK
>>
>> [Niels de Vos] extras/group : add database workload profile
>>
>> [Amar Tumballi] xlators/meta: Fix resource_leak
>>
>> [Raghavendra G] cluster/dht: Do not try to use up the readdirp buffer
>>
>> ------------------------------------------
>> [...truncated 2.63 MB...]
>>         arguments = {{gp_offset = 0, fp_offset = 0, overflow_arg_area =
>> 0x7f9a3247eb60, reg_save_area = 0x7f9a18144a18}}
>>         msg = 0x0
>>         ctx = 0xe1e010
>>         host = 0x0
>>         hints = {ai_flags = 0, ai_family = 0, ai_socktype = 0,
>> ai_protocol = 0, ai_addrlen = 0, ai_addr = 0x0, ai_canonname = 0x0, ai_next
>> = 0x0}
>>         result = 0x0
>> #12 0x00007f9a3241a394 in server_rpc_notify (rpc=0x7f9a340449a0,
>> xl=0x7f9a3402fd40, event=RPCSVC_EVENT_DISCONNECT, data=0x7f9a1ebd99c0) at <
>> https://build.gluster.org/job/regression-test-with-
>> multiplex/ws/xlators/protocol/server/src/server.c>:511
>>         detached = true
>>         this = 0x7f9a3402fd40
>>         trans = 0x7f9a1ebd99c0
>>         conf = 0x7f9a34037320
>>         client = 0xff6830
>>         auth_path = 0x7f9a34032210 "/d/backends/vol01/brick0"
>>         ret = 0
>>         victim_found = false
>>         xlator_name = 0x0
>>         ctx = 0xe1e010
>>         top = 0x0
>>         trav_p = 0x0
>>         travxl = 0x0
>>         xprtrefcount = 0
>>         tmp = 0x0
>>         __FUNCTION__ = "server_rpc_notify"
>> #13 0x00007f9a46ff597f in rpcsvc_handle_disconnect (svc=0x7f9a340449a0,
>> trans=0x7f9a1ebd99c0) at <https://build.gluster.org/
>> job/regression-test-with-multiplex/ws/rpc/rpc-lib/src/rpcsvc.c>:772
>>         event = RPCSVC_EVENT_DISCONNECT
>>         wrappers = 0x7f9a18da62c0
>>         wrapper = 0x7f9a34044a30
>>         ret = -1
>>         i = 0
>>         wrapper_count = 1
>>         listener = 0x0
>> #14 0x00007f9a46ff5afc in rpcsvc_notify (trans=0x7f9a1ebd99c0,
>> mydata=0x7f9a340449a0, event=RPC_TRANSPORT_DISCONNECT,
>> data=0x7f9a1ebd99c0) at <https://build.gluster.org/
>> job/regression-test-with-multiplex/ws/rpc/rpc-lib/src/rpcsvc.c>:810
>>         ret = -1
>>         msg = 0x0
>>         new_trans = 0x0
>>         svc = 0x7f9a340449a0
>>         listener = 0x0
>>         __FUNCTION__ = "rpcsvc_notify"
>> #15 0x00007f9a46ffb74b in rpc_transport_notify (this=0x7f9a1ebd99c0,
>> event=RPC_TRANSPORT_DISCONNECT, data=0x7f9a1ebd99c0) at <
>> https://build.gluster.org/job/regression-test-with-
>> multiplex/ws/rpc/rpc-lib/src/rpc-transport.c>:537
>>         ret = -1
>>         __FUNCTION__ = "rpc_transport_notify"
>> #16 0x00007f9a3be07ffb in socket_event_poll_err (this=0x7f9a1ebd99c0,
>> gen=1, idx=140) at <https://build.gluster.org/job/regression-test-with-
>> multiplex/ws/rpc/rpc-transport/socket/src/socket.c>:1209
>>         priv = 0x7f9a1ebd9f20
>>         socket_closed = true
>>         __FUNCTION__ = "socket_event_poll_err"
>> #17 0x00007f9a3be0d5ad in socket_event_handler (fd=372, idx=140, gen=1,
>> data=0x7f9a1ebd99c0, poll_in=1, poll_out=0, poll_err=0) at <
>> https://build.gluster.org/job/regression-test-with-multiplex/ws/rpc/rpc-
>> transport/socket/src/socket.c>:2627
>>         this = 0x7f9a1ebd99c0
>>         priv = 0x7f9a1ebd9f20
>>         ret = -1
>>         ctx = 0xe1e010
>>         socket_closed = false
>>         notify_handled = true
>>         __FUNCTION__ = "socket_event_handler"
>> #18 0x00007f9a472b1834 in event_dispatch_epoll_handler
>> (event_pool=0xe55c30, event=0x7f9713f0aea0) at <
>> https://build.gluster.org/job/regression-test-with-
>> multiplex/ws/libglusterfs/src/event-epoll.c>:587
>>         ev_data = 0x7f9713f0aea4
>>         slot = 0xe8a340
>>         handler = 0x7f9a3be0d278 <socket_event_handler>
>>         data = 0x7f9a1ebd99c0
>>         idx = 140
>>         gen = 1
>>         ret = -1
>>         fd = 372
>>         handled_error_previously = false
>>         __FUNCTION__ = "event_dispatch_epoll_handler"
>> #19 0x00007f9a472b1b27 in event_dispatch_epoll_worker
>> (data=0x7f972d6cb8d0) at <https://build.gluster.org/
>> job/regression-test-with-multiplex/ws/libglusterfs/src/event-epoll.c>:663
>>         event = {events = 1, data = {ptr = 0x10000008c, fd = 140, u32 =
>> 140, u64 = 4294967436}}
>>         ret = 1
>>         ev_data = 0x7f972d6cb8d0
>>         event_pool = 0xe55c30
>>         myindex = 106
>>         timetodie = 0
>>         __FUNCTION__ = "event_dispatch_epoll_worker"
>> #20 0x00007f9a4628ce25 in start_thread () from /lib64/libpthread.so.0
>> No symbol table info available.
>> #21 0x00007f9a45951bad in clone () from /lib64/libc.so.6
>> No symbol table info available.
>>
>> Thread 1 (Thread 0x7f97caf1a700 (LWP 24553)):
>> #0  0x00007f9a32b037fc in quota_lookup (frame=0x7f9a2c02f778,
>> this=0x7f99f45e3ad0, loc=0x7f97caf198d0, xattr_req=0x0) at <
>> https://build.gluster.org/job/regression-test-with-
>> multiplex/ws/xlators/features/quota/src/quota.c>:1663
>>         priv = 0x0
>>         ret = -1
>>         local = 0x0
>>         __FUNCTION__ = "quota_lookup"
>> #1  0x00007f9a328df23b in io_stats_lookup (frame=0x7f9a2c004048,
>> this=0x7f99f45e52c0, loc=0x7f97caf198d0, xdata=0x0) at <
>> https://build.gluster.org/job/regression-test-with-
>> multiplex/ws/xlators/debug/io-stats/src/io-stats.c>:2784
>>         _new = 0x7f9a2c02f778
>>         old_THIS = 0x7f99f45e52c0
>>         next_xl_fn = 0x7f9a32b037a7 <quota_lookup>
>>         tmp_cbk = 0x7f9a328d30f2 <io_stats_lookup_cbk>
>>         __FUNCTION__ = "io_stats_lookup"
>> #2  0x00007f9a4730d76d in default_lookup (frame=0x7f9a2c004048,
>> this=0x7f99f45e6e20, loc=0x7f97caf198d0, xdata=0x0) at defaults.c:2714
>>         old_THIS = 0x7f99f45e6e20
>>         next_xl = 0x7f99f45e52c0
>>         next_xl_fn = 0x7f9a328dee1e <io_stats_lookup>
>>         opn = 27
>>         __FUNCTION__ = "default_lookup"
>> #3  0x00007f9a47289c41 in syncop_lookup (subvol=0x7f99f45e6e20,
>> loc=0x7f97caf198d0, iatt=0x7f97caf19830, parent=0x0, xdata_in=0x0,
>> xdata_out=0x0) at <https://build.gluster.org/job/regression-test-with-
>> multiplex/ws/libglusterfs/src/syncop.c>:1260
>>         _new = 0x7f9a2c004048
>>         old_THIS = 0x7f9a3402fd40
>>         next_xl_fn = 0x7f9a4730d57f <default_lookup>
>>         tmp_cbk = 0x7f9a47289683 <syncop_lookup_cbk>
>>         task = 0x0
>>         frame = 0x7f9a2c00a4a8
>>         args = {op_ret = 0, op_errno = 0, iatt1 = {ia_flags = 0, ia_ino =
>> 0, ia_dev = 0, ia_rdev = 0, ia_size = 0, ia_nlink = 0, ia_uid = 0, ia_gid =
>> 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_mtime = 0, ia_ctime = 0,
>> ia_btime = 0, ia_atime_nsec = 0, ia_mtime_nsec = 0, ia_ctime_nsec = 0,
>> ia_btime_nsec = 0, ia_attributes = 0, ia_attributes_mask = 0, ia_gfid =
>> '\000' <repeats 15 times>, ia_type = IA_INVAL, ia_prot = {suid = 0 '\000',
>> sgid = 0 '\000', sticky = 0 '\000', owner = {read = 0 '\000', write = 0
>> '\000', exec = 0 '\000'}, group = {read = 0 '\000', write = 0 '\000', exec
>> = 0 '\000'}, other = {read = 0 '\000', write = 0 '\000', exec = 0
>> '\000'}}}, iatt2 = {ia_flags = 0, ia_ino = 0, ia_dev = 0, ia_rdev = 0,
>> ia_size = 0, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_blksize = 0,
>> ia_blocks = 0, ia_atime = 0, ia_mtime = 0, ia_ctime = 0, ia_btime = 0,
>> ia_atime_nsec = 0, ia_mtime_nsec = 0, ia_ctime_nsec = 0, ia_btime_nsec = 0,
>> ia_attributes = 0, ia_attributes_mask = 0, ia_gf
>>  id = '\000' <repeats 15 times>, ia_type = IA_INVAL, ia_prot = {suid = 0
>> '\000', sgid = 0 '\000', sticky = 0 '\000', owner = {read = 0 '\000', write
>> = 0 '\000', exec = 0 '\000'}, group = {read = 0 '\000', write = 0 '\000',
>> exec = 0 '\000'}, other = {read = 0 '\000', write = 0 '\000', exec = 0
>> '\000'}}}, xattr = 0x0, statvfs_buf = {f_bsize = 0, f_frsize = 0, f_blocks
>> = 0, f_bfree = 0, f_bavail = 0, f_files = 0, f_ffree = 0, f_favail = 0,
>> f_fsid = 0, f_flag = 0, f_namemax = 0, __f_spare = {0, 0, 0, 0, 0, 0}},
>> vector = 0x0, count = 0, iobref = 0x0, buffer = 0x0, xdata = 0x0, flock =
>> {l_type = 0, l_whence = 0, l_start = 0, l_len = 0, l_pid = 0, l_owner =
>> {len = 0, data = '\000' <repeats 1023 times>}}, lease = {cmd = 0,
>> lease_type = NONE, lease_id = '\000' <repeats 15 times>, lease_flags = 0},
>> dict_out = 0x0, uuid = '\000' <repeats 15 times>, errstr = 0x0, dict = 0x0,
>> lock_dict = {__data = {__lock = 0, __count = 0, __owner = 0, __nusers = 0,
>> __kind = 0, __spins = 0, __elision = 0,
>>   __list = {__prev = 0x0, __next = 0x0}}, __size = '\000' <repeats 39
>> times>, __align = 0}, barrier = {initialized = false, guard = {__data =
>> {__lock = 0, __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins =
>> 0, __elision = 0, __list = {__prev = 0x0, __next = 0x0}}, __size = '\000'
>> <repeats 39 times>, __align = 0}, cond = {__data = {__lock = 0, __futex =
>> 0, __total_seq = 0, __wakeup_seq = 0, __woken_seq = 0, __mutex = 0x0,
>> __nwaiters = 0, __broadcast_seq = 0}, __size = '\000' <repeats 47 times>,
>> __align = 0}, waitq = {next = 0x0, prev = 0x0}, count = 0, waitfor = 0},
>> task = 0x0, mutex = {__data = {__lock = 0, __count = 0, __owner = 0,
>> __nusers = 0, __kind = 0, __spins = 0, __elision = 0, __list = {__prev =
>> 0x0, __next = 0x0}}, __size = '\000' <repeats 39 times>, __align = 0}, cond
>> = {__data = {__lock = 0, __futex = 0, __total_seq = 0, __wakeup_seq = 0,
>> __woken_seq = 0, __mutex = 0x0, __nwaiters = 0, __broadcast_seq = 0},
>> __size = '\000' <repeats 47 times>, __align = 0
>>  }, done = 0, entries = {{list = {next = 0x0, prev = 0x0}, {next = 0x0,
>> prev = 0x0}}, d_ino = 0, d_off = 0, d_len = 0, d_type = 0, d_stat =
>> {ia_flags = 0, ia_ino = 0, ia_dev = 0, ia_rdev = 0, ia_size = 0, ia_nlink =
>> 0, ia_uid = 0, ia_gid = 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0,
>> ia_mtime = 0, ia_ctime = 0, ia_btime = 0, ia_atime_nsec = 0, ia_mtime_nsec
>> = 0, ia_ctime_nsec = 0, ia_btime_nsec = 0, ia_attributes = 0,
>> ia_attributes_mask = 0, ia_gfid = '\000' <repeats 15 times>, ia_type =
>> IA_INVAL, ia_prot = {suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000',
>> owner = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, group = {read
>> = 0 '\000', write = 0 '\000', exec = 0 '\000'}, other = {read = 0 '\000',
>> write = 0 '\000', exec = 0 '\000'}}}, dict = 0x0, inode = 0x0, d_name =
>> 0x7f97caf19358 ""}, offset = 0, locklist = {list = {next = 0x0, prev =
>> 0x0}, flock = {l_type = 0, l_whence = 0, l_start = 0, l_len = 0, l_pid = 0,
>> l_owner = {len = 0, data = '\000' <repeats 1023 tim
>>  es>}}, client_uid = 0x0, lk_flags = 0}}
>>         __FUNCTION__ = "syncop_lookup"
>> #4  0x00007f9a3245b582 in server_first_lookup (this=0x7f9a3402fd40,
>> client=0x7f9a2d455980, reply=0x7f9a2c019198) at <
>> https://build.gluster.org/job/regression-test-with-
>> multiplex/ws/xlators/protocol/server/src/server-handshake.c>:382
>>         loc = {path = 0x7f9a32482ae5 "/", name = 0x7f9a32482bea "", inode
>> = 0x7f975994e518, parent = 0x0, gfid = '\000' <repeats 15 times>, "\001",
>> pargfid = '\000' <repeats 15 times>}
>>         iatt = {ia_flags = 0, ia_ino = 0, ia_dev = 0, ia_rdev = 0,
>> ia_size = 0, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_blksize = 0,
>> ia_blocks = 0, ia_atime = 0, ia_mtime = 0, ia_ctime = 0, ia_btime = 0,
>> ia_atime_nsec = 0, ia_mtime_nsec = 0, ia_ctime_nsec = 0, ia_btime_nsec = 0,
>> ia_attributes = 0, ia_attributes_mask = 0, ia_gfid = '\000' <repeats 15
>> times>, ia_type = IA_INVAL, ia_prot = {suid = 0 '\000', sgid = 0 '\000',
>> sticky = 0 '\000', owner = {read = 0 '\000', write = 0 '\000', exec = 0
>> '\000'}, group = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'},
>> other = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}}}
>>         dict = 0x0
>>         ret = 0
>>         xl = 0x7f99f45e6e20
>>         msg = 0x0
>>         inode = 0x0
>>         bname = 0x0
>>         str = 0x0
>>         tmp = 0x0
>>         saveptr = 0x0
>>         __FUNCTION__ = "server_first_lookup"
>> #5  0x00007f9a3245d0ed in server_setvolume (req=0x7f9a2c02ca68) at <
>> https://build.gluster.org/job/regression-test-with-
>> multiplex/ws/xlators/protocol/server/src/server-handshake.c>:886
>>         args = {dict = {dict_len = 841, dict_val = 0x7f9a2d474090 ""}}
>>         rsp = 0x0
>>         client = 0x7f9a2d455980
>>         serv_ctx = 0x7f9a2d455d70
>>         conf = 0x7f9a34037320
>>         peerinfo = 0x7f9a2d451ad0
>>         reply = 0x7f9a2c019198
>>         config_params = 0x7f9a2c0101c8
>>         params = 0x7f9a2c010448
>>         name = 0x7f9a2d4c22f0 "/d/backends/vol02/brick3"
>>         client_uid = 0x7f9a2d4790e0 "CTX_ID:21c57b5f-40da-4b9b-
>> b8e4-a3400fc28deb-GRAPH_ID:0-PID:23286-HOST:builder104.
>> cloud.gluster.org-PC_NAME:patchy-vol02-client-3-RECON_NO:-0"
>>         clnt_version = 0x7f9a2d441360 "4.2dev"
>>         xl = 0x7f99f45e6e20
>>         msg = 0x0
>>         volfile_key = 0x7f9a2d47ca30 "gluster/glustershd"
>>         this = 0x7f9a3402fd40
>>         checksum = 0
>>         ret = 0
>>         op_ret = 0
>>         op_errno = 22
>>         buf = 0x0
>>         opversion = 40200
>>         xprt = 0x7f9a34036eb0
>>         fop_version = 1298437
>>         mgmt_version = 0
>>         ctx = 0xe1e010
>>         tmp = 0x7f99f4b92e50
>>         subdir_mount = 0x0
>>         client_name = 0x7f9a2d441560 "glustershd"
>>         cleanup_starting = false
>>         __FUNCTION__ = "server_setvolume"
>>         __PRETTY_FUNCTION__ = "server_setvolume"
>> #6  0x00007f9a46ff57e2 in rpcsvc_handle_rpc_call (svc=0x7f9a340449a0,
>> trans=0x7f9a2d451a10, msg=0x7f9a2d4ae6b0) at <https://build.gluster.org/
>> job/regression-test-with-multiplex/ws/rpc/rpc-lib/src/rpcsvc.c>:721
>>         actor = 0x7f9a326958c0 <gluster_handshake_actors+64>
>>         actor_fn = 0x7f9a3245b876 <server_setvolume>
>>         req = 0x7f9a2c02ca68
>>         ret = -1
>>         port = 48482
>>         is_unix = false
>>         empty = false
>>         unprivileged = true
>>         reply = 0x0
>>         drc = 0x0
>>         __FUNCTION__ = "rpcsvc_handle_rpc_call"
>> #7  0x00007f9a46ff5b35 in rpcsvc_notify (trans=0x7f9a2d451a10,
>> mydata=0x7f9a340449a0, event=RPC_TRANSPORT_MSG_RECEIVED,
>> data=0x7f9a2d4ae6b0) at <https://build.gluster.org/
>> job/regression-test-with-multiplex/ws/rpc/rpc-lib/src/rpcsvc.c>:815
>>         ret = -1
>>         msg = 0x7f9a2d4ae6b0
>>         new_trans = 0x0
>>         svc = 0x7f9a340449a0
>>         listener = 0x0
>>         __FUNCTION__ = "rpcsvc_notify"
>> #8  0x00007f9a46ffb74b in rpc_transport_notify (this=0x7f9a2d451a10,
>> event=RPC_TRANSPORT_MSG_RECEIVED, data=0x7f9a2d4ae6b0) at <
>> https://build.gluster.org/job/regression-test-with-
>> multiplex/ws/rpc/rpc-lib/src/rpc-transport.c>:537
>>         ret = -1
>>         __FUNCTION__ = "rpc_transport_notify"
>> #9  0x00007f9a3be0ced8 in socket_event_poll_in (this=0x7f9a2d451a10,
>> notify_handled=true) at <https://build.gluster.org/
>> job/regression-test-with-multiplex/ws/rpc/rpc-
>> transport/socket/src/socket.c>:2462
>>         ret = 0
>>         pollin = 0x7f9a2d4ae6b0
>>         priv = 0x7f9a2d473db0
>>         ctx = 0xe1e010
>> #10 0x00007f9a3be0d546 in socket_event_handler (fd=562, idx=291, gen=1,
>> data=0x7f9a2d451a10, poll_in=1, poll_out=0, poll_err=0) at <
>> https://build.gluster.org/job/regression-test-with-multiplex/ws/rpc/rpc-
>> transport/socket/src/socket.c>:2618
>>         this = 0x7f9a2d451a10
>>         priv = 0x7f9a2d473db0
>>         ret = 0
>>         ctx = 0xe1e010
>>         socket_closed = false
>>         notify_handled = false
>>         __FUNCTION__ = "socket_event_handler"
>> #11 0x00007f9a472b1834 in event_dispatch_epoll_handler
>> (event_pool=0xe55c30, event=0x7f97caf19ea0) at <
>> https://build.gluster.org/job/regression-test-with-
>> multiplex/ws/libglusterfs/src/event-epoll.c>:587
>>         ev_data = 0x7f97caf19ea4
>>         slot = 0xe8dbe0
>>         handler = 0x7f9a3be0d278 <socket_event_handler>
>>         data = 0x7f9a2d451a10
>>         idx = 291
>>         gen = 1
>>         ret = -1
>>         fd = 562
>>         handled_error_previously = false
>>         __FUNCTION__ = "event_dispatch_epoll_handler"
>> #12 0x00007f9a472b1b27 in event_dispatch_epoll_worker
>> (data=0x7f9a11cee770) at <https://build.gluster.org/
>> job/regression-test-with-multiplex/ws/libglusterfs/src/event-epoll.c>:663
>>         event = {events = 1, data = {ptr = 0x100000123, fd = 291, u32 =
>> 291, u64 = 4294967587}}
>>         ret = 1
>>         ev_data = 0x7f9a11cee770
>>         event_pool = 0xe55c30
>>         myindex = 82
>>         timetodie = 0
>>         __FUNCTION__ = "event_dispatch_epoll_worker"
>> #13 0x00007f9a4628ce25 in start_thread () from /lib64/libpthread.so.0
>> No symbol table info available.
>> #14 0x00007f9a45951bad in clone () from /lib64/libc.so.6
>> No symbol table info available.
>> =========================================================
>>               Finish backtrace
>>          program name : /build/install/sbin/glusterfsd
>>          corefile     : /glusterepoll81-23297.core
>> =========================================================
>>
>> + rm -f /build/install/cores/gdbout.txt
>> + sort /build/install/cores/liblist.txt
>> + uniq
>> + cat /build/install/cores/liblist.txt.tmp
>> + grep -v /build/install
>> + tar -cf /archives/archived_builds/build-install-regression-test-with-multiplex-786.tar
>> /build/install/sbin /build/install/bin /build/install/lib
>> /build/install/libexec /build/install/cores
>> tar: Removing leading `/' from member names
>> + tar -rhf /archives/archived_builds/build-install-regression-test-with-multiplex-786.tar
>> -T /build/install/cores/liblist.txt
>> tar: Removing leading `/' from member names
>> + bzip2 /archives/archived_builds/build-install-regression-test-with-multiplex-786.tar
>> + rm -f /build/install/cores/liblist.txt
>> + rm -f /build/install/cores/liblist.txt.tmp
>> + find /archives -size +1G -delete -type f
>> + echo 'Cores and build archived in http://builder104.cloud.gluster.org/archived_builds/build-install-regression-test-with-multiplex-786.tar.bz2'
>> Cores and build archived in http://builder104.cloud.gluster.org/archived_builds/build-install-regression-test-with-multiplex-786.tar.bz2
>> + echo 'Open core using the following command to get a proper stack'
>> Open core using the following command to get a proper stack
>> + echo 'Example: From root of extracted tarball'
>> Example: From root of extracted tarball
>> + echo '\t\tgdb -ex '\''set sysroot ./'\'' -ex '\''core-file
>> ./build/install/cores/xxx.core'\'' <target, say
>> ./build/install/sbin/glusterd>'
>> \t\tgdb -ex 'set sysroot ./' -ex 'core-file ./build/install/cores/xxx.core'
>> <target, say ./build/install/sbin/glusterd>
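The hint above can be wrapped in a small helper; this is a sketch, using the core and binary names reported earlier in this log (glusterepoll81-23297.core and glusterfsd) — substitute the names from your own run:

```shell
#!/bin/sh
# Sketch: compose the gdb invocation the job suggests, to be run from the
# root of the extracted build-install tarball. The core and binary paths
# below are the ones reported earlier in this log; adjust for other runs.
CORE=./build/install/cores/glusterepoll81-23297.core
BINARY=./build/install/sbin/glusterfsd
# 'set sysroot ./' makes gdb resolve shared libraries from the extracted
# tree instead of the host's /lib64, so symbols match the build that dumped
# the core. Print the command rather than exec it, so it can be reviewed.
printf "gdb -ex 'set sysroot ./' -ex 'core-file %s' %s\n" "$CORE" "$BINARY"
```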
>> + RET=1
>> + '[' 1 -ne 0 ']'
>> + tar -czf <https://build.gluster.org/job/regression-test-with-multiplex/786/artifact/glusterfs-logs.tgz> /var/log/glusterfs
>> /var/log/messages /var/log/messages-20180603 /var/log/messages-20180610
>> /var/log/messages-20180617 /var/log/messages-20180624
>> tar: Removing leading `/' from member names
>> + scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i <https://build.gluster.org/job/regression-test-with-multiplex/ws/> glusterfs-logs.tgz _logs_collector at http.int.rht.gluster.org:/var/www/glusterfs-logs/regression-test-with-multiplex-786.tgz
>> ssh: connect to host http.int.rht.gluster.org port 22: Connection timed
>> out
>> lost connection
>> + true
>> + case $(uname -s) in
>> ++ uname -s
>> + /sbin/sysctl -w 'kernel.core_pattern=|/usr/libexec/abrt-hook-ccpp %s
>> %c %p %u %g %t %e %P %I %h'
>> kernel.core_pattern = |/usr/libexec/abrt-hook-ccpp %s %c %p %u %g %t %e
>> %P %I %h
>> + exit 1
>> Build step 'Execute shell' marked build as failure
>> _______________________________________________
>> maintainers mailing list
>> maintainers at gluster.org
>> http://lists.gluster.org/mailman/listinfo/maintainers
>>
> --
> - Atin (atinm)
>