[Gluster-Maintainers] Build failed in Jenkins: netbsd-periodic #511
jenkins at build.gluster.org
Thu Jan 4 14:11:16 UTC 2018
See <https://build.gluster.org/job/netbsd-periodic/511/display/redirect?page=changes>
Changes:
[Pranith Kumar K] debug/delay-gen: volume option fixes for GD2
[Nithya Balachandran] cli: Fixed a use_after_free
------------------------------------------
[...truncated 250.48 KB...]
ok 24, LINENUM:41
not ok 25 Got "7" instead of "0", LINENUM:42
FAILED COMMAND: 0 get_pending_heal_count patchy
not ok 26 Got "" instead of "1", LINENUM:45
FAILED COMMAND: 1 afr_child_up_status patchy 0
not ok 27 Got "" instead of "1", LINENUM:46
FAILED COMMAND: 1 afr_child_up_status patchy 1
not ok 28 Got "" instead of "1", LINENUM:47
FAILED COMMAND: 1 afr_child_up_status patchy 2
ok 29, LINENUM:48
ok 30, LINENUM:49
ok 31, LINENUM:52
ok 32, LINENUM:53
not ok 33 Got "" instead of "1048576", LINENUM:56
FAILED COMMAND: 1048576 stat -c %s /mnt/glusterfs/0/file1
ok 34, LINENUM:57
ok 35, LINENUM:60
not ok 36 Got "" instead of "0", LINENUM:61
FAILED COMMAND: 0 stat -c %s /d/backends/patchy2/file2
ok 37, LINENUM:64
ok 38, LINENUM:65
ok 39, LINENUM:68
ok 40, LINENUM:69
Failed 6/40 subtests
Test Summary Report
-------------------
./tests/basic/afr/arbiter-add-brick.t (Wstat: 0 Tests: 40 Failed: 6)
Failed tests: 25-28, 33, 36
Files=1, Tests=40, 215 wallclock secs ( 0.04 usr 0.01 sys + 13862104.86 cusr 5125291.88 csys = 18987396.79 CPU)
Result: FAIL
./tests/basic/afr/arbiter-add-brick.t: bad status 1
*********************************
*       REGRESSION FAILED       *
* Retrying failed tests in case *
* we got some spurious failures *
*********************************
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
rm: /build/install/var/run/gluster: is a directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /d/backends/patchy2/file2: lstat: No such file or directory
umount: /mnt/glusterfs/0: Invalid argument
./tests/basic/afr/arbiter-add-brick.t ..
1..40
ok 1, LINENUM:6
ok 2, LINENUM:7
ok 3, LINENUM:10
ok 4, LINENUM:11
ok 5, LINENUM:12
ok 6, LINENUM:13
ok 7, LINENUM:14
ok 8, LINENUM:15
ok 9, LINENUM:16
ok 10, LINENUM:19
ok 11, LINENUM:20
ok 12, LINENUM:21
ok 13, LINENUM:25
ok 14, LINENUM:26
ok 15, LINENUM:29
ok 16, LINENUM:30
ok 17, LINENUM:32
ok 18, LINENUM:33
ok 19, LINENUM:36
ok 20, LINENUM:37
ok 21, LINENUM:38
ok 22, LINENUM:39
ok 23, LINENUM:40
ok 24, LINENUM:41
not ok 25 Got "7" instead of "0", LINENUM:42
FAILED COMMAND: 0 get_pending_heal_count patchy
not ok 26 Got "" instead of "1", LINENUM:45
FAILED COMMAND: 1 afr_child_up_status patchy 0
not ok 27 Got "" instead of "1", LINENUM:46
FAILED COMMAND: 1 afr_child_up_status patchy 1
not ok 28 Got "" instead of "1", LINENUM:47
FAILED COMMAND: 1 afr_child_up_status patchy 2
ok 29, LINENUM:48
ok 30, LINENUM:49
ok 31, LINENUM:52
ok 32, LINENUM:53
not ok 33 Got "" instead of "1048576", LINENUM:56
FAILED COMMAND: 1048576 stat -c %s /mnt/glusterfs/0/file1
ok 34, LINENUM:57
ok 35, LINENUM:60
not ok 36 Got "" instead of "0", LINENUM:61
FAILED COMMAND: 0 stat -c %s /d/backends/patchy2/file2
ok 37, LINENUM:64
ok 38, LINENUM:65
ok 39, LINENUM:68
ok 40, LINENUM:69
Failed 6/40 subtests
Test Summary Report
-------------------
./tests/basic/afr/arbiter-add-brick.t (Wstat: 0 Tests: 40 Failed: 6)
Failed tests: 25-28, 33, 36
Files=1, Tests=40, 221 wallclock secs ( 0.04 usr 0.02 sys + 6881989.14 cusr 13763970.05 csys = 20645959.25 CPU)
Result: FAIL
./tests/basic/afr/arbiter-add-brick.t: 2 new core files
End of test ./tests/basic/afr/arbiter-add-brick.t
================================================================================
Run complete
================================================================================
Number of tests found: 3
Number of tests selected for run based on pattern: 3
Number of tests skipped as they were marked bad: 0
Number of tests skipped because of known_issues: 0
Number of tests that were run: 3
Tests ordered by time taken, slowest to fastest:
================================================================================
./tests/basic/afr/arbiter-add-brick.t - 216 second
./tests/basic/afr/add-brick-self-heal.t - 20 second
./tests/basic/0symbol-check.t - 0 second
1 test(s) failed
./tests/basic/afr/arbiter-add-brick.t
1 test(s) generated core
./tests/basic/afr/arbiter-add-brick.t
Result is 1
tar: Removing leading / from absolute path names in the archive
Cores and build archived in http://nbslave72.cloud.gluster.org/archives/archived_builds/build-install-20180104135605.tgz
Open core using the following command to get a proper stack...
Example: From root of extracted tarball
gdb -ex 'set sysroot ./' -ex 'core-file ./build/install/cores/xxx.core' <target, say ./build/install/sbin/glusterd>
NB: this requires a gdb built with 'NetBSD ELF' osabi support, which is available natively on a NetBSD-7.0/i386 system
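The core-inspection steps above can be sketched as a small shell helper. This is a minimal sketch using the paths shown in the log; the core filename `xxx.core` is a placeholder carried over from the original message (the real name is only known after extracting the tarball), and the `GDB_CMD` variable is introduced here purely for illustration.

```shell
# Build the gdb invocation described above, run from the root of the
# extracted tarball. The sysroot is set to "./" so gdb resolves the
# archived libraries instead of the host's.
CORE=./build/install/cores/xxx.core          # placeholder core name from the log
TARGET=./build/install/sbin/glusterd          # example target named in the log
GDB_CMD="gdb -ex 'set sysroot ./' -ex 'core-file $CORE' $TARGET"
# Print the command rather than executing it, since a NetBSD-ELF-aware
# gdb is required to actually load the core.
echo "$GDB_CMD"
```

Listing other candidate targets first (e.g. `ls ./build/install/cores/`) helps pick the right binary, since the core may come from glusterfsd or a client process rather than glusterd.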
tar: Removing leading / from absolute path names in the archive
Logs archived in http://nbslave72.cloud.gluster.org/archives/logs/glusterfs-logs-20180104135605.tgz
error: fatal: change is closed
fatal: one or more reviews failed; review output above
Build step 'Execute shell' marked build as failure
More information about the maintainers mailing list