[Gluster-devel] [Gluster-Maintainers] Build failed in Jenkins: netbsd-periodic #98

Nigel Babu nigelb at redhat.com
Mon Jul 24 07:40:45 UTC 2017


Niels, could you look at this failure? It seems to have started after this
patch was merged: https://review.gluster.org/#/c/17779/
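
A rough way to confirm that locally (a sketch only, not a verified recipe:
the commit hash has to be looked up from the review page, and this assumes
a built source tree with the standard test harness):

    # Hypothetical check: revert the suspect change, rebuild, and re-run
    # the failing test. <commit-of-17779> is a placeholder.
    git revert <commit-of-17779>
    make -j4 && sudo make install
    sudo prove -vf ./tests/basic/afr/add-brick-self-heal.t

If the test passes again with the revert in place, that points fairly
strongly at the patch above.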



On Mon, Jul 24, 2017 at 1:04 PM, Atin Mukherjee <amukherj at redhat.com> wrote:

> Bump.
>
> The failures are consistent. May I request someone from the replicate team
> to take a look at it?
>
> On Fri, Jul 21, 2017 at 7:24 AM, Atin Mukherjee <amukherj at redhat.com>
> wrote:
>
>> NetBSD runs don't go through. add-brick-self-heal.t seems to be
>> generating a core.
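>>
>> A rough way to chase this outside Jenkins (a sketch, assuming a built
>> tree and the standard test harness; the loop count is arbitrary):
>>
>>     # Run just this test repeatedly, stopping at the first failure,
>>     # then look for fresh cores under the install prefix.
>>     for i in 1 2 3 4 5; do
>>         sudo prove -vf ./tests/basic/afr/add-brick-self-heal.t || break
>>     done
>>     ls /build/install/cores/   # CI layout; adjust for a local prefix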
>>
>> ---------- Forwarded message ---------
>> From: <jenkins at build.gluster.org>
>> Date: Fri, 21 Jul 2017 at 06:20
>> Subject: [Gluster-Maintainers] Build failed in Jenkins: netbsd-periodic
>> #98
>> To: <maintainers at gluster.org>, <jeff at pl.atyp.us>, <srangana at redhat.com>,
>> <vbellur at redhat.com>
>>
>>
>> See <https://build.gluster.org/job/netbsd-periodic/98/display/redirect?page=changes>
>>
>> Changes:
>>
>> [Jeff Darcy] glusterd: fix brick start race
>>
>> [Vijay Bellur] MAINTAINERS: Changes for Maintainers 2.0
>>
>> ------------------------------------------
>> [...truncated 241.43 KB...]
>> kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ...
>> or kill -l [sigspec]
>> rm: /build/install/var/run/gluster: is a directory
>> [...kill/rm messages above repeated another 34 times...]
>> ./tests/basic/afr/../../include.rc: line 314: 22640 Segmentation fault (core dumped) gluster --mode=script --wignore volume set patchy self-heal-daemon on
>> kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ...
>> or kill -l [sigspec]
>> rm: /build/install/var/run/gluster: is a directory
>> [...kill/rm messages above repeated another 29 times...]
>> ./tests/basic/afr/../../include.rc: line 314: 27660 Segmentation fault (core dumped) gluster --mode=script --wignore volume heal patchy
>> ls: /d/backends/patchy2: No such file or directory
>> ls: /d/backends/patchy0: No such file or directory
>> ls: /d/backends/patchy2: No such file or directory
>> ls: /d/backends/patchy1: No such file or directory
>> diff: /d/backends/patchy0/file1.txt: No such file or directory
>> diff: /d/backends/patchy2/file1.txt: No such file or directory
>> ./tests/basic/afr/add-brick-self-heal.t ..
>> 1..34
>> not ok 1 , LINENUM:6
>> FAILED COMMAND: glusterd
>> not ok 2 , LINENUM:7
>> FAILED COMMAND: pidof glusterd
>> not ok 3 , LINENUM:8
>> FAILED COMMAND: gluster --mode=script --wignore volume create patchy
>> replica 2 nbslave72.cloud.gluster.org:/d/backends/patchy0
>> nbslave72.cloud.gluster.org:/d/backends/patchy1
>> not ok 4 , LINENUM:9
>> FAILED COMMAND: gluster --mode=script --wignore volume start patchy
>> not ok 5 , LINENUM:10
>> FAILED COMMAND: gluster --mode=script --wignore volume set patchy
>> cluster.data-self-heal off
>> not ok 6 , LINENUM:11
>> FAILED COMMAND: gluster --mode=script --wignore volume set patchy
>> cluster.metadata-self-heal off
>> not ok 7 , LINENUM:12
>> FAILED COMMAND: gluster --mode=script --wignore volume set patchy
>> cluster.entry-self-heal off
>> not ok 8 , LINENUM:14
>> FAILED COMMAND: gluster --mode=script --wignore volume set patchy
>> self-heal-daemon off
>> not ok 9 , LINENUM:15
>> FAILED COMMAND: _GFS --attribute-timeout=0 --entry-timeout=0
>> --volfile-id=patchy --volfile-server=nbslave72.cloud.gluster.org
>> /mnt/glusterfs/0
>> not ok 10 , LINENUM:24
>> FAILED COMMAND: setfattr -n user.test -v qwerty /mnt/glusterfs/0/file5.txt
>> not ok 11 , LINENUM:27
>> FAILED COMMAND: gluster --mode=script --wignore volume add-brick patchy
>> replica 3 nbslave72.cloud.gluster.org:/d/backends/patchy2
>> not ok 12 , LINENUM:30
>> FAILED COMMAND: setfattr -n trusted.afr.patchy-client-0 -v
>> 0x000000000000000000000001 /d/backends/patchy2/
>> not ok 13 , LINENUM:31
>> FAILED COMMAND: setfattr -n trusted.afr.patchy-client-1 -v
>> 0x000000000000000000000001 /d/backends/patchy2/
>> not ok 14 Got "" instead of "000000000000000100000001", LINENUM:34
>> FAILED COMMAND: 000000000000000100000001 get_hex_xattr
>> trusted.afr.patchy-client-2 /d/backends/patchy0
>> not ok 15 Got "" instead of "000000000000000100000001", LINENUM:35
>> FAILED COMMAND: 000000000000000100000001 get_hex_xattr
>> trusted.afr.patchy-client-2 /d/backends/patchy1
>> not ok 16 Got "" instead of "000000000000000000000001", LINENUM:36
>> FAILED COMMAND: 000000000000000000000001 get_hex_xattr trusted.afr.dirty
>> /d/backends/patchy2
>> not ok 17 Got "" instead of "1", LINENUM:38
>> FAILED COMMAND: 1 afr_child_up_status patchy 0
>> not ok 18 Got "" instead of "1", LINENUM:39
>> FAILED COMMAND: 1 afr_child_up_status patchy 1
>> not ok 19 Got "" instead of "1", LINENUM:40
>> FAILED COMMAND: 1 afr_child_up_status patchy 2
>> not ok 20 , LINENUM:42
>> FAILED COMMAND: gluster --mode=script --wignore volume set patchy
>> self-heal-daemon on
>> not ok 21 Got "" instead of "Y", LINENUM:43
>> FAILED COMMAND: Y glustershd_up_status
>> not ok 22 Got "" instead of "1", LINENUM:44
>> FAILED COMMAND: 1 afr_child_up_status_in_shd patchy 0
>> not ok 23 Got "" instead of "1", LINENUM:45
>> FAILED COMMAND: 1 afr_child_up_status_in_shd patchy 1
>> not ok 24 Got "" instead of "1", LINENUM:46
>> FAILED COMMAND: 1 afr_child_up_status_in_shd patchy 2
>> not ok 25 , LINENUM:47
>> FAILED COMMAND: gluster --mode=script --wignore volume heal patchy
>> not ok 26 Got "" instead of "^0$", LINENUM:50
>> FAILED COMMAND: ^0$ get_pending_heal_count patchy
>> ok 27, LINENUM:53
>> ok 28, LINENUM:54
>> not ok 29 , LINENUM:57
>> FAILED COMMAND: diff /d/backends/patchy0/file1.txt
>> /d/backends/patchy2/file1.txt
>> not ok 30 Got "" instead of "qwerty", LINENUM:60
>> FAILED COMMAND: qwerty get_text_xattr user.test
>> /d/backends/patchy2/file5.txt
>> not ok 31 Got "" instead of "qwerty", LINENUM:61
>> FAILED COMMAND: qwerty get_text_xattr user.test
>> /d/backends/patchy0/file5.txt
>> not ok 32 Got "" instead of "000000000000000000000000", LINENUM:63
>> FAILED COMMAND: 000000000000000000000000 get_hex_xattr
>> trusted.afr.patchy-client-2 /d/backends/patchy0
>> not ok 33 Got "" instead of "000000000000000000000000", LINENUM:64
>> FAILED COMMAND: 000000000000000000000000 get_hex_xattr
>> trusted.afr.patchy-client-2 /d/backends/patchy1
>> not ok 34 Got "" instead of "000000000000000000000000", LINENUM:65
>> FAILED COMMAND: 000000000000000000000000 get_hex_xattr trusted.afr.dirty
>> /d/backends/patchy2
>> Failed 32/34 subtests
>>
>> Test Summary Report
>> -------------------
>> ./tests/basic/afr/add-brick-self-heal.t (Wstat: 0 Tests: 34 Failed: 32)
>>   Failed tests:  1-26, 29-34
>> Files=1, Tests=34, 266 wallclock secs ( 0.07 usr  0.03 sys +  4.05 cusr
>> 7.91 csys = 12.06 CPU)
>> Result: FAIL
>> ./tests/basic/afr/add-brick-self-heal.t: 234 new core files
>> End of test ./tests/basic/afr/add-brick-self-heal.t
>> ================================================================================
>>
>>
>> Run complete
>> ================================================================================
>> Number of tests found:                             2
>> Number of tests selected for run based on pattern: 2
>> Number of tests skipped as they were marked bad:   0
>> Number of tests skipped because of known_issues:   0
>> Number of tests that were run:                     2
>>
>> 1 test(s) failed
>> ./tests/basic/afr/add-brick-self-heal.t
>>
>> 1 test(s) generated core
>> ./tests/basic/afr/add-brick-self-heal.t
>>
>> Tests ordered by time taken, slowest to fastest:
>> ================================================================================
>> ./tests/basic/afr/add-brick-self-heal.t  -  266 second
>> ./tests/basic/0symbol-check.t  -  0 second
>>
>> Result is 1
>>
>> tar: Removing leading / from absolute path names in the archive
>> Cores and build archived in http://nbslave72.cloud.gluster.org/archives/archived_builds/build-install-20170721003808.tgz
>> Open the core with the following command to get a proper stack trace.
>> Example, from the root of the extracted tarball:
>>     gdb -ex 'set sysroot ./' -ex 'core-file ./build/install/cores/xxx.core' <target, say ./build/install/sbin/glusterd>
>> NB: this requires a gdb built with 'NetBSD ELF' osabi support, which is
>> available natively on a NetBSD-7.0/i386 system
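>>
>> As a concrete walk-through (a sketch; xxx.core is the placeholder name
>> from the example above, and glusterd is just one possible target binary):
>>
>>     # Unpack the archived build (paths below follow the note above).
>>     tar xzf build-install-20170721003808.tgz
>>     # 'set sysroot ./' makes gdb resolve shared libraries from the
>>     # archived tree instead of the local system.
>>     gdb -ex 'set sysroot ./' \
>>         -ex 'core-file ./build/install/cores/xxx.core' \
>>         ./build/install/sbin/glusterd
>>     (gdb) bt   # print the backtrace of the crashing thread
>>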
>> tar: Removing leading / from absolute path names in the archive
>> Logs archived in http://nbslave72.cloud.gluster.org/archives/logs/glusterfs-logs-20170721003808.tgz
>> Build step 'Execute shell' marked build as failure
>> _______________________________________________
>> maintainers mailing list
>> maintainers at gluster.org
>> http://lists.gluster.org/mailman/listinfo/maintainers
>> --
>> - Atin (atinm)
>>
>
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
>



-- 
nigelb