[Gluster-Maintainers] Build failed in Jenkins: netbsd-periodic #512

jenkins at build.gluster.org
Fri Jan 5 14:09:40 UTC 2018


See <https://build.gluster.org/job/netbsd-periodic/512/display/redirect?page=changes>

Changes:

[Xavier Hernandez] cluster/ec: OpenFD heal implementation for EC

[Amar Tumballi] tests: Enable geo-rep test cases

[atin] glusterd: connect to an existing brick process when quorum status is

[Pranith Kumar K] dict: add more types for values

------------------------------------------
[...truncated 244.04 KB...]
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
rm: /build/install/var/run/gluster: is a directory
[...previous 2 lines repeated 34 more times...]
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
[...previous line repeated 11 more times...]
stat: /d/backends/patchy2/file2: lstat: No such file or directory
[...previous line repeated 11 more times...]
umount: /mnt/glusterfs/0: Invalid argument
./tests/basic/afr/arbiter-add-brick.t .. 
1..40
ok 1, LINENUM:6
ok 2, LINENUM:7
ok 3, LINENUM:10
ok 4, LINENUM:11
ok 5, LINENUM:12
ok 6, LINENUM:13
ok 7, LINENUM:14
ok 8, LINENUM:15
ok 9, LINENUM:16
ok 10, LINENUM:19
ok 11, LINENUM:20
ok 12, LINENUM:21
ok 13, LINENUM:25
ok 14, LINENUM:26
ok 15, LINENUM:29
ok 16, LINENUM:30
ok 17, LINENUM:32
ok 18, LINENUM:33
ok 19, LINENUM:36
ok 20, LINENUM:37
ok 21, LINENUM:38
ok 22, LINENUM:39
ok 23, LINENUM:40
ok 24, LINENUM:41
not ok 25 Got "1" instead of "0", LINENUM:42
FAILED COMMAND: 0 get_pending_heal_count patchy
not ok 26 Got "" instead of "1", LINENUM:45
FAILED COMMAND: 1 afr_child_up_status patchy 0
not ok 27 Got "" instead of "1", LINENUM:46
FAILED COMMAND: 1 afr_child_up_status patchy 1
not ok 28 Got "" instead of "1", LINENUM:47
FAILED COMMAND: 1 afr_child_up_status patchy 2
ok 29, LINENUM:48
ok 30, LINENUM:49
ok 31, LINENUM:52
ok 32, LINENUM:53
not ok 33 Got "" instead of "1048576", LINENUM:56
FAILED COMMAND: 1048576 stat -c %s /mnt/glusterfs/0/file1
ok 34, LINENUM:57
ok 35, LINENUM:60
not ok 36 Got "" instead of "0", LINENUM:61
FAILED COMMAND: 0 stat -c %s /d/backends/patchy2/file2
ok 37, LINENUM:64
ok 38, LINENUM:65
ok 39, LINENUM:68
ok 40, LINENUM:69
Failed 6/40 subtests 

Test Summary Report
-------------------
./tests/basic/afr/arbiter-add-brick.t (Wstat: 0 Tests: 40 Failed: 6)
  Failed tests:  25-28, 33, 36
Files=1, Tests=40, 207 wallclock secs ( 0.05 usr  0.02 sys + 19991669.21 cusr 21472532.35 csys = 41464201.63 CPU)
Result: FAIL
./tests/basic/afr/arbiter-add-brick.t: bad status 1

       *********************************
       *       REGRESSION FAILED       *
       * Retrying failed tests in case *
       * we got some spurious failures *
       *********************************

./tests/basic/afr/arbiter-add-brick.t .. 
1..40
ok 1, LINENUM:6
ok 2, LINENUM:7
ok 3, LINENUM:10
ok 4, LINENUM:11
ok 5, LINENUM:12
ok 6, LINENUM:13
ok 7, LINENUM:14
ok 8, LINENUM:15
ok 9, LINENUM:16
ok 10, LINENUM:19
ok 11, LINENUM:20
ok 12, LINENUM:21
ok 13, LINENUM:25
ok 14, LINENUM:26
ok 15, LINENUM:29
ok 16, LINENUM:30
ok 17, LINENUM:32
ok 18, LINENUM:33
ok 19, LINENUM:36
ok 20, LINENUM:37
ok 21, LINENUM:38
ok 22, LINENUM:39
ok 23, LINENUM:40
ok 24, LINENUM:41
not ok 25 Got "1" instead of "0", LINENUM:42
FAILED COMMAND: 0 get_pending_heal_count patchy
ok 26, LINENUM:45
ok 27, LINENUM:46
ok 28, LINENUM:47
ok 29, LINENUM:48
ok 30, LINENUM:49
ok 31, LINENUM:52
ok 32, LINENUM:53
ok 33, LINENUM:56
ok 34, LINENUM:57
ok 35, LINENUM:60
ok 36, LINENUM:61
ok 37, LINENUM:64
ok 38, LINENUM:65
ok 39, LINENUM:68
ok 40, LINENUM:69
Failed 1/40 subtests 

Test Summary Report
-------------------
./tests/basic/afr/arbiter-add-brick.t (Wstat: 0 Tests: 40 Failed: 1)
  Failed test:  25
Files=1, Tests=40, 134 wallclock secs ( 0.03 usr  0.00 sys + 23027625.31 cusr 11768710.58 csys = 34796335.92 CPU)
Result: FAIL
./tests/basic/afr/arbiter-add-brick.t: 1 new core files
End of test ./tests/basic/afr/arbiter-add-brick.t
================================================================================


Run complete
================================================================================
Number of tests found:                             3
Number of tests selected for run based on pattern: 3
Number of tests skipped as they were marked bad:   0
Number of tests skipped because of known_issues:   0
Number of tests that were run:                     3

Tests ordered by time taken, slowest to fastest: 
================================================================================
./tests/basic/afr/arbiter-add-brick.t  -  207 second
./tests/basic/afr/add-brick-self-heal.t  -  22 second
./tests/basic/0symbol-check.t  -  0 second

1 test(s) failed 
./tests/basic/afr/arbiter-add-brick.t

1 test(s) generated core 
./tests/basic/afr/arbiter-add-brick.t

Result is 1

tar: Removing leading / from absolute path names in the archive
Cores and build archived in http://nbslave72.cloud.gluster.org/archives/archived_builds/build-install-20180105135602.tgz
Open the core using the following command to get a proper stack trace.
Example (run from the root of the extracted tarball):
       gdb -ex 'set sysroot ./'   -ex 'core-file ./build/install/cores/xxx.core'   <target, say ./build/install/sbin/glusterd>
NB: this requires a gdb built with 'NetBSD ELF' osabi support, which is available natively on a NetBSD-7.0/i386 system
tar: Removing leading / from absolute path names in the archive
Logs archived in http://nbslave72.cloud.gluster.org/archives/logs/glusterfs-logs-20180105135602.tgz
error: fatal: change is closed

fatal: one or more reviews failed; review output above
Build step 'Execute shell' marked build as failure


More information about the maintainers mailing list