[Gluster-Maintainers] Build failed in Jenkins: netbsd-periodic #505

jenkins at build.gluster.org
Fri Dec 29 14:07:23 UTC 2017


See <https://build.gluster.org/job/netbsd-periodic/505/display/redirect?page=changes>

Changes:

[Aravinda VK] geo-rep: Log message improvements

[Poornima] quiesce: add fallocate and seek fops

[atin] snapshot : after brick reset/replace snapshot creation fails

[Pranith Kumar K] mgmt/glusterd: Adding validation for setting quorum-count

------------------------------------------
[...truncated 241.47 KB...]
rm: /build/install/var/run/gluster: is a directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
[...previous two lines repeated 36 more times...]
rm: /build/install/var/run/gluster: is a directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
[...previous line repeated 11 more times...]
stat: /d/backends/patchy2/file2: lstat: No such file or directory
[...previous line repeated 11 more times...]
umount: /mnt/glusterfs/0: Invalid argument
./tests/basic/afr/arbiter-add-brick.t .. 
1..40
ok 1, LINENUM:6
ok 2, LINENUM:7
ok 3, LINENUM:10
ok 4, LINENUM:11
ok 5, LINENUM:12
ok 6, LINENUM:13
ok 7, LINENUM:14
ok 8, LINENUM:15
ok 9, LINENUM:16
ok 10, LINENUM:19
ok 11, LINENUM:20
not ok 12, LINENUM:21
FAILED COMMAND: dd if=/dev/urandom of=/mnt/glusterfs/0/file1 bs=1024 count=1024
ok 13, LINENUM:25
ok 14, LINENUM:26
ok 15, LINENUM:29
ok 16, LINENUM:30
ok 17, LINENUM:32
ok 18, LINENUM:33
ok 19, LINENUM:36
ok 20, LINENUM:37
ok 21, LINENUM:38
ok 22, LINENUM:39
ok 23, LINENUM:40
ok 24, LINENUM:41
ok 25, LINENUM:42
not ok 26 Got "" instead of "1", LINENUM:45
FAILED COMMAND: 1 afr_child_up_status patchy 0
not ok 27 Got "" instead of "1", LINENUM:46
FAILED COMMAND: 1 afr_child_up_status patchy 1
not ok 28 Got "" instead of "1", LINENUM:47
FAILED COMMAND: 1 afr_child_up_status patchy 2
ok 29, LINENUM:48
ok 30, LINENUM:49
ok 31, LINENUM:52
ok 32, LINENUM:53
not ok 33 Got "" instead of "1048576", LINENUM:56
FAILED COMMAND: 1048576 stat -c %s /mnt/glusterfs/0/file1
ok 34, LINENUM:57
ok 35, LINENUM:60
not ok 36 Got "" instead of "0", LINENUM:61
FAILED COMMAND: 0 stat -c %s /d/backends/patchy2/file2
ok 37, LINENUM:64
ok 38, LINENUM:65
ok 39, LINENUM:68
ok 40, LINENUM:69
Failed 6/40 subtests 

Test Summary Report
-------------------
./tests/basic/afr/arbiter-add-brick.t (Wstat: 0 Tests: 40 Failed: 6)
  Failed tests:  12, 26-28, 33, 36
Files=1, Tests=40, 129 wallclock secs ( 0.06 usr  0.00 sys +  3.30 cusr  4.95 csys =  8.31 CPU)
Result: FAIL
./tests/basic/afr/arbiter-add-brick.t: bad status 1

       *********************************
       *       REGRESSION FAILED       *
       * Retrying failed tests in case *
       * we got some spurious failures *
       *********************************

./tests/basic/afr/arbiter-add-brick.t .. 
1..40
ok 1, LINENUM:6
ok 2, LINENUM:7
ok 3, LINENUM:10
ok 4, LINENUM:11
ok 5, LINENUM:12
ok 6, LINENUM:13
ok 7, LINENUM:14
ok 8, LINENUM:15
ok 9, LINENUM:16
ok 10, LINENUM:19
ok 11, LINENUM:20
ok 12, LINENUM:21
ok 13, LINENUM:25
ok 14, LINENUM:26
ok 15, LINENUM:29
ok 16, LINENUM:30
ok 17, LINENUM:32
ok 18, LINENUM:33
ok 19, LINENUM:36
ok 20, LINENUM:37
ok 21, LINENUM:38
ok 22, LINENUM:39
ok 23, LINENUM:40
ok 24, LINENUM:41
ok 25, LINENUM:42
ok 26, LINENUM:45
ok 27, LINENUM:46
ok 28, LINENUM:47
ok 29, LINENUM:48
ok 30, LINENUM:49
ok 31, LINENUM:52
ok 32, LINENUM:53
ok 33, LINENUM:56
ok 34, LINENUM:57
ok 35, LINENUM:60
ok 36, LINENUM:61
ok 37, LINENUM:64
ok 38, LINENUM:65
ok 39, LINENUM:68
ok 40, LINENUM:69
ok
All tests successful.
Files=1, Tests=40, 76 wallclock secs ( 0.05 usr  0.01 sys +  2.18 cusr  3.15 csys =  5.39 CPU)
Result: PASS
./tests/basic/afr/arbiter-add-brick.t: 1 new core files
End of test ./tests/basic/afr/arbiter-add-brick.t
================================================================================


Run complete
================================================================================
Number of tests found:                             3
Number of tests selected for run based on pattern: 3
Number of tests skipped as they were marked bad:   0
Number of tests skipped because of known_issues:   0
Number of tests that were run:                     3

Tests ordered by time taken, slowest to fastest: 
================================================================================
./tests/basic/afr/arbiter-add-brick.t  -  129 seconds
./tests/basic/afr/add-brick-self-heal.t  -  20 seconds
./tests/basic/0symbol-check.t  -  0 seconds

0 test(s) failed 


1 test(s) generated core 
./tests/basic/afr/arbiter-add-brick.t

Result is 1

tar: Removing leading / from absolute path names in the archive
Cores and build archived in http://nbslave72.cloud.gluster.org/archives/archived_builds/build-install-20171229135616.tgz
Open the core using the following command to get a proper stack trace...
Example: From root of extracted tarball
       gdb -ex 'set sysroot ./'   -ex 'core-file ./build/install/cores/xxx.core'   <target, say ./build/install/sbin/glusterd>
NB: this requires a gdb built with 'NetBSD ELF' osabi support, which is available natively on a NetBSD-7.0/i386 system
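The core-inspection instructions above can be sketched as a small shell sequence. The archive URL and paths are taken from the log; 'xxx.core' is the log's own placeholder, and the real core file name inside the tarball will differ. The fetch/extract steps are left commented out so the sketch stays side-effect free:

```shell
# Archive location from the log above:
ARCHIVE_URL="http://nbslave72.cloud.gluster.org/archives/archived_builds/build-install-20171229135616.tgz"
CORE="./build/install/cores/xxx.core"     # placeholder name; list build/install/cores/ for the real one
BINARY="./build/install/sbin/glusterd"    # pick whichever binary actually dumped core

# Fetch and unpack, then cd into the extracted tarball root:
# curl -O "$ARCHIVE_URL"
# tar xzf build-install-20171229135616.tgz

# 'set sysroot ./' makes gdb resolve shared libraries relative to the
# extracted tree instead of the live system, so the stack symbolizes
# against the archived build:
CMD="gdb -ex 'set sysroot ./' -ex 'core-file $CORE' $BINARY"
echo "$CMD"
```

As the NB line notes, the gdb used must understand the 'NetBSD ELF' osabi, so this is best run on a matching NetBSD system.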
tar: Removing leading / from absolute path names in the archive
Logs archived in http://nbslave72.cloud.gluster.org/archives/logs/glusterfs-logs-20171229135616.tgz
error: fatal: change is closed

fatal: one or more reviews failed; review output above
Build step 'Execute shell' marked build as failure


More information about the maintainers mailing list