[Gluster-Maintainers] Build failed in Jenkins: netbsd-periodic #521
jenkins at build.gluster.org
Sun Jan 14 14:21:00 UTC 2018
See <https://build.gluster.org/job/netbsd-periodic/521/display/redirect>
------------------------------------------
[...truncated 272.91 KB...]
./tests/basic/afr/granular-esh/granular-indices-but-non-granular-heal.t ..
1..29
ok 1, LINENUM:11
ok 2, LINENUM:12
ok 3, LINENUM:14
ok 4, LINENUM:15
ok 5, LINENUM:16
ok 6, LINENUM:17
ok 7, LINENUM:18
ok 8, LINENUM:19
ok 9, LINENUM:20
ok 10, LINENUM:22
ok 11, LINENUM:25
ok 12, LINENUM:34
ok 13, LINENUM:39
ok 14, LINENUM:39
ok 15, LINENUM:43
ok 16, LINENUM:46
ok 17, LINENUM:47
ok 18, LINENUM:48
ok 19, LINENUM:51
ok 20, LINENUM:52
ok 21, LINENUM:53
ok 22, LINENUM:54
ok 23, LINENUM:60
ok 24, LINENUM:63
ok 25, LINENUM:68
ok 26, LINENUM:68
ok 27, LINENUM:72
ok 28, LINENUM:73
ok 29, LINENUM:74
ok
All tests successful.
Files=1, Tests=29, 24 wallclock secs ( 0.06 usr 0.00 sys + 2783729.25 cusr 5010696.65 csys = 7794425.96 CPU)
Result: PASS
End of test ./tests/basic/afr/granular-esh/granular-indices-but-non-granular-heal.t
================================================================================
================================================================================
[14:18:09] Running tests in file ./tests/basic/afr/granular-esh/replace-brick.t
./tests/basic/afr/granular-esh/replace-brick.t ..
1..34
ok 1, LINENUM:7
ok 2, LINENUM:8
ok 3, LINENUM:9
ok 4, LINENUM:10
ok 5, LINENUM:11
ok 6, LINENUM:12
ok 7, LINENUM:13
ok 8, LINENUM:14
ok 9, LINENUM:15
ok 10, LINENUM:17
ok 11, LINENUM:26
ok 12, LINENUM:29
ok 13, LINENUM:32
ok 14, LINENUM:35
ok 15, LINENUM:38
ok 16, LINENUM:41
ok 17, LINENUM:43
ok 18, LINENUM:44
ok 19, LINENUM:46
ok 20, LINENUM:47
ok 21, LINENUM:48
ok 22, LINENUM:49
ok 23, LINENUM:50
ok 24, LINENUM:53
ok 25, LINENUM:56
ok 26, LINENUM:59
ok 27, LINENUM:60
ok 28, LINENUM:63
ok 29, LINENUM:65
ok 30, LINENUM:68
ok 31, LINENUM:69
ok 32, LINENUM:71
ok 33, LINENUM:72
ok 34, LINENUM:73
ok
All tests successful.
Files=1, Tests=34, 23 wallclock secs ( 0.03 usr 0.01 sys + 2.00 cusr 3.20 csys = 5.24 CPU)
Result: PASS
End of test ./tests/basic/afr/granular-esh/replace-brick.t
================================================================================
================================================================================
[14:18:32] Running tests in file ./tests/basic/afr/heal-info.t
./tests/basic/afr/heal-info.t ..
1..9
ok 1, LINENUM:21
ok 2, LINENUM:22
ok 3, LINENUM:23
ok 4, LINENUM:24
ok 5, LINENUM:25
ok 6, LINENUM:26
ok 7, LINENUM:27
ok 8, LINENUM:33
ok 9, LINENUM:34
ok
All tests successful.
Files=1, Tests=9, 37 wallclock secs ( 0.04 usr 0.00 sys + 4.38 cusr 5.21 csys = 9.63 CPU)
Result: PASS
End of test ./tests/basic/afr/heal-info.t
================================================================================
================================================================================
[14:19:10] Running tests in file ./tests/basic/afr/heal-quota.t
touch: /mnt/glusterfs/0/b: Socket is not connected
dd: block size `1M': illegal number
cat: /proc/26041/cmdline: No such file or directory
Usage: gf_attach uds_path volfile_path (to attach)
gf_attach -d uds_path brick_path (to detach)
dd: block size `1M': illegal number
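The repeated "dd: block size `1M': illegal number" errors above are a portability gap, not a test logic bug: the NetBSD slave's dd rejects the upper-case `1M` suffix that GNU dd accepts (this log is the evidence of the rejection). Spelling the block size in plain bytes works on both. A minimal sketch, with `/tmp/testfile` as an illustrative path:

```shell
# 'bs=1M' fails on this NetBSD slave ("illegal number" above);
# giving the size in bytes (1 MiB = 1048576) is portable across
# GNU and BSD dd implementations.
dd if=/dev/zero of=/tmp/testfile bs=1048576 count=1
```
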
./tests/basic/afr/heal-quota.t ..
1..19
ok 1, LINENUM:10
ok 2, LINENUM:11
ok 3, LINENUM:12
ok 4, LINENUM:13
ok 5, LINENUM:14
ok 6, LINENUM:16
ok 7, LINENUM:17
ok 8, LINENUM:18
ok 9, LINENUM:19
ok 10, LINENUM:20
not ok 11 , LINENUM:22
FAILED COMMAND: touch /mnt/glusterfs/0/a /mnt/glusterfs/0/b
ok 12, LINENUM:24
ok 13, LINENUM:26
ok 14, LINENUM:27
ok 15, LINENUM:28
ok 16, LINENUM:29
ok 17, LINENUM:30
ok 18, LINENUM:32
ok 19, LINENUM:33
Failed 1/19 subtests
Test Summary Report
-------------------
./tests/basic/afr/heal-quota.t (Wstat: 0 Tests: 19 Failed: 1)
Failed test: 11
Files=1, Tests=19, 20 wallclock secs ( 0.05 usr 0.00 sys + 1.74 cusr 2.80 csys = 4.59 CPU)
Result: FAIL
./tests/basic/afr/heal-quota.t: bad status 1
*********************************
* REGRESSION FAILED *
* Retrying failed tests in case *
* we got some spurious failures *
*********************************
touch: /mnt/glusterfs/0/b: Socket is not connected
dd: block size `1M': illegal number
cat: /proc/11971/cmdline: No such file or directory
Usage: gf_attach uds_path volfile_path (to attach)
gf_attach -d uds_path brick_path (to detach)
dd: block size `1M': illegal number
./tests/basic/afr/heal-quota.t ..
1..19
ok 1, LINENUM:10
ok 2, LINENUM:11
ok 3, LINENUM:12
ok 4, LINENUM:13
ok 5, LINENUM:14
ok 6, LINENUM:16
ok 7, LINENUM:17
ok 8, LINENUM:18
ok 9, LINENUM:19
ok 10, LINENUM:20
not ok 11 , LINENUM:22
FAILED COMMAND: touch /mnt/glusterfs/0/a /mnt/glusterfs/0/b
ok 12, LINENUM:24
ok 13, LINENUM:26
ok 14, LINENUM:27
ok 15, LINENUM:28
ok 16, LINENUM:29
ok 17, LINENUM:30
ok 18, LINENUM:32
ok 19, LINENUM:33
Failed 1/19 subtests
Test Summary Report
-------------------
./tests/basic/afr/heal-quota.t (Wstat: 0 Tests: 19 Failed: 1)
Failed test: 11
Files=1, Tests=19, 24 wallclock secs ( 0.03 usr 0.00 sys + 1.74 cusr 2.86 csys = 4.63 CPU)
Result: FAIL
./tests/basic/afr/heal-quota.t: 4 new core files
End of test ./tests/basic/afr/heal-quota.t
================================================================================
Run complete
================================================================================
Number of tests found: 26
Number of tests selected for run based on pattern: 26
Number of tests skipped as they were marked bad: 1
Number of tests skipped because of known_issues: 0
Number of tests that were run: 25
Tests ordered by time taken, slowest to fastest:
================================================================================
./tests/basic/afr/gfid-mismatch-resolution-with-fav-child-policy.t - 158 second
./tests/basic/afr/gfid-mismatch-resolution-with-cli.t - 114 second
./tests/basic/afr/entry-self-heal.t - 110 second
./tests/basic/afr/arbiter-add-brick.t - 98 second
./tests/basic/afr/granular-esh/conservative-merge.t - 50 second
./tests/basic/afr/arbiter.t - 48 second
./tests/basic/afr/gfid-self-heal.t - 39 second
./tests/basic/afr/heal-info.t - 37 second
./tests/basic/afr/durability-off.t - 37 second
./tests/basic/afr/arbiter-remove-brick.t - 31 second
./tests/basic/afr/arbiter-mount.t - 31 second
./tests/basic/afr/granular-esh/granular-esh.t - 27 second
./tests/basic/afr/granular-esh/granular-indices-but-non-granular-heal.t - 24 second
./tests/basic/afr/granular-esh/replace-brick.t - 23 second
./tests/basic/afr/granular-esh/add-brick.t - 22 second
./tests/basic/afr/add-brick-self-heal.t - 21 second
./tests/basic/afr/client-side-heal.t - 20 second
./tests/basic/afr/heal-quota.t - 20 second
./tests/basic/afr/data-self-heal.t - 19 second
./tests/basic/afr/gfid-heal.t - 16 second
./tests/basic/afr/arbiter-statfs.t - 13 second
./tests/basic/afr/compounded-write-txns.t - 12 second
./tests/basic/afr/gfid-mismatch.t - 10 second
./tests/basic/afr/arbiter-cli.t - 4 second
./tests/basic/0symbol-check.t - 0 second
1 test(s) failed
./tests/basic/afr/heal-quota.t
1 test(s) generated core
./tests/basic/afr/heal-quota.t
Result is 1
tar: Removing leading / from absolute path names in the archive
Cores and build archived in http://nbslave7c.cloud.gluster.org/archives/archived_builds/build-install-20180114140303.tgz
To get a proper stack trace, open a core file from the root of the extracted tarball, for example:
gdb -ex 'set sysroot ./' -ex 'core-file ./build/install/cores/xxx.core' <target, say ./build/install/sbin/glusterd>
NB: this requires a gdb built with 'NetBSD ELF' osabi support, which is available natively on a NetBSD-7.0/i386 system
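The debugging steps above can be sketched end to end as a shell session. The tarball URL is taken from this log; the extraction directory name and `xxx.core` are placeholders for whatever the archive actually contains:

```shell
# Fetch and unpack the archived build+cores tarball (URL from this log)
wget http://nbslave7c.cloud.gluster.org/archives/archived_builds/build-install-20180114140303.tgz
tar -xzf build-install-20180114140303.tgz
cd build-install-20180114140303   # placeholder: check the tarball's actual top-level dir

# Find the real core filename ('xxx.core' below is a placeholder)
ls build/install/cores/

# Open the core against the bundled binaries; requires a gdb with
# 'NetBSD ELF' osabi support (available natively on NetBSD-7.0/i386)
gdb -ex 'set sysroot ./' \
    -ex 'core-file ./build/install/cores/xxx.core' \
    ./build/install/sbin/glusterd
```

Setting the sysroot to the extracted tree makes gdb resolve shared libraries from the archived install rather than the local system, which is what yields a usable backtrace.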
tar: Removing leading / from absolute path names in the archive
Logs archived in http://nbslave7c.cloud.gluster.org/archives/logs/glusterfs-logs-20180114140303.tgz
error: fatal: change is closed
fatal: one or more reviews failed; review output above
Build step 'Execute shell' marked build as failure