[Gluster-Maintainers] Build failed in Jenkins: regression-test-burn-in #3855

jenkins at build.gluster.org
Thu Feb 15 14:31:37 UTC 2018


See <https://build.gluster.org/job/regression-test-burn-in/3855/display/redirect?page=changes>

Changes:

[Xavier Hernandez] tests: bring option of per test timeout

------------------------------------------
[...truncated 478.56 KB...]
not ok 43 , LINENUM:50
FAILED COMMAND: gluster --mode=script --wignore volume create patchy-vol09 replica 2 builder106.cloud.gluster.org:/d/backends/vol09/brick0 builder106.cloud.gluster.org:/d/backends/vol09/brick1 builder106.cloud.gluster.org:/d/backends/vol09/brick2 builder106.cloud.gluster.org:/d/backends/vol09/brick3 builder106.cloud.gluster.org:/d/backends/vol09/brick4 builder106.cloud.gluster.org:/d/backends/vol09/brick5
volume start: patchy-vol09: failed: Volume patchy-vol09 does not exist
not ok 44 , LINENUM:51
FAILED COMMAND: gluster --mode=script --wignore volume start patchy-vol09
not ok 45 Got "0" instead of "7", LINENUM:53
FAILED COMMAND: 7 count_up_bricks patchy-vol09
not ok 46 , LINENUM:56
FAILED COMMAND: _GFS --attribute-timeout=0 --entry-timeout=0 -s builder106.cloud.gluster.org --volfile-id=patchy-vol09 /mnt/glusterfs/vol09
ok 47, LINENUM:83
ok 48, LINENUM:50
not ok 49 , LINENUM:51
FAILED COMMAND: gluster --mode=script --wignore volume start patchy-vol10
not ok 50 Got "0" instead of "7", LINENUM:53
FAILED COMMAND: 7 count_up_bricks patchy-vol10
not ok 51 , LINENUM:56
FAILED COMMAND: _GFS --attribute-timeout=0 --entry-timeout=0 -s builder106.cloud.gluster.org --volfile-id=patchy-vol10 /mnt/glusterfs/vol10
ok 52, LINENUM:83
not ok 53 , LINENUM:50
FAILED COMMAND: gluster --mode=script --wignore volume create patchy-vol11 replica 2 builder106.cloud.gluster.org:/d/backends/vol11/brick0 builder106.cloud.gluster.org:/d/backends/vol11/brick1 builder106.cloud.gluster.org:/d/backends/vol11/brick2 builder106.cloud.gluster.org:/d/backends/vol11/brick3 builder106.cloud.gluster.org:/d/backends/vol11/brick4 builder106.cloud.gluster.org:/d/backends/vol11/brick5
not ok 54 , LINENUM:51
FAILED COMMAND: gluster --mode=script --wignore volume start patchy-vol11
not ok 55 Got "0" instead of "7", LINENUM:53
FAILED COMMAND: 7 count_up_bricks patchy-vol11
not ok 56 , LINENUM:56
FAILED COMMAND: _GFS --attribute-timeout=0 --entry-timeout=0 -s builder106.cloud.gluster.org --volfile-id=patchy-vol11 /mnt/glusterfs/vol11
ok 57, LINENUM:83
ok 58, LINENUM:50
ok 59, LINENUM:51
ok 60, LINENUM:53
ok 61, LINENUM:56
ok 62, LINENUM:83
ok 63, LINENUM:50
ok 64, LINENUM:51
ok 65, LINENUM:53
ok 66, LINENUM:56
ok 67, LINENUM:83
ok 68, LINENUM:50
ok 69, LINENUM:51
ok 70, LINENUM:53
ok 71, LINENUM:56
ok 72, LINENUM:83
ok 73, LINENUM:50
ok 74, LINENUM:51
ok 75, LINENUM:53
ok 76, LINENUM:56
ok 77, LINENUM:83
ok 78, LINENUM:50
ok 79, LINENUM:51
ok 80, LINENUM:53
ok 81, LINENUM:56
ok 82, LINENUM:83
ok 83, LINENUM:50
ok 84, LINENUM:51
ok 85, LINENUM:53
ok 86, LINENUM:56
ok 87, LINENUM:83
ok 88, LINENUM:50
ok 89, LINENUM:51
ok 90, LINENUM:53
ok 91, LINENUM:56
ok 92, LINENUM:83
ok 93, LINENUM:50
ok 94, LINENUM:51
ok 95, LINENUM:53
ok 96, LINENUM:56
ok 97, LINENUM:83
ok 98, LINENUM:50
ok 99, LINENUM:51
ok 100, LINENUM:53
ok 101, LINENUM:56
ok 102, LINENUM:83
ok 103, LINENUM:87
ok 104, LINENUM:89
ok 105, LINENUM:95
rm: cannot remove ‘/mnt/glusterfs/0’: Is a directory
Aborting.

/mnt/nfs/1 could not be deleted, here are the left over items
drwxr-xr-x. 2 root root 4096 Feb 15 14:26 /mnt/glusterfs/0

Please correct the problem and try again.

Dubious, test returned 1 (wstat 256, 0x100)
Failed 15/105 subtests 

Test Summary Report
-------------------
./tests/bugs/core/bug-1432542-mpx-restart-crash.t (Wstat: 256 Tests: 105 Failed: 15)
  Failed tests:  30-31, 40-41, 43-46, 49-51, 53-56
  Non-zero exit status: 1
Files=1, Tests=105, 294 wallclock secs ( 0.06 usr  0.01 sys + 15.12 cusr  8.86 csys = 24.05 CPU)
Result: FAIL
./tests/bugs/core/bug-1432542-mpx-restart-crash.t: bad status 1

       *********************************
       *       REGRESSION FAILED       *
       * Retrying failed tests in case *
       * we got some spurious failures *
       *********************************

./tests/bugs/core/bug-1432542-mpx-restart-crash.t .. 
1..105
ok 1, LINENUM:74
ok 2, LINENUM:75
ok 3, LINENUM:50
ok 4, LINENUM:51
ok 5, LINENUM:53
ok 6, LINENUM:56
ok 7, LINENUM:83
ok 8, LINENUM:50
ok 9, LINENUM:51
ok 10, LINENUM:53
ok 11, LINENUM:56
ok 12, LINENUM:83
ok 13, LINENUM:50
ok 14, LINENUM:51
ok 15, LINENUM:53
ok 16, LINENUM:56
ok 17, LINENUM:83
ok 18, LINENUM:50
ok 19, LINENUM:51
ok 20, LINENUM:53
ok 21, LINENUM:56
ok 22, LINENUM:83
ok 23, LINENUM:50
ok 24, LINENUM:51
ok 25, LINENUM:53
ok 26, LINENUM:56
ok 27, LINENUM:83
ok 28, LINENUM:50
ok 29, LINENUM:51
ok 30, LINENUM:53
ok 31, LINENUM:56
ok 32, LINENUM:83
ok 33, LINENUM:50
ok 34, LINENUM:51
ok 35, LINENUM:53
ok 36, LINENUM:56
ok 37, LINENUM:83
ok 38, LINENUM:50
ok 39, LINENUM:51
ok 40, LINENUM:53
ok 41, LINENUM:56
ok 42, LINENUM:83
ok 43, LINENUM:50
ok 44, LINENUM:51
ok 45, LINENUM:53
ok 46, LINENUM:56
ok 47, LINENUM:83
ok 48, LINENUM:50
ok 49, LINENUM:51
ok 50, LINENUM:53
ok 51, LINENUM:56
ok 52, LINENUM:83
ok 53, LINENUM:50
ok 54, LINENUM:51
ok 55, LINENUM:53
ok 56, LINENUM:56
ok 57, LINENUM:83
ok 58, LINENUM:50
ok 59, LINENUM:51
ok 60, LINENUM:53
ok 61, LINENUM:56
ok 62, LINENUM:83
ok 63, LINENUM:50
ok 64, LINENUM:51
ok 65, LINENUM:53
ok 66, LINENUM:56
ok 67, LINENUM:83
ok 68, LINENUM:50
ok 69, LINENUM:51
ok 70, LINENUM:53
ok 71, LINENUM:56
ok 72, LINENUM:83
ok 73, LINENUM:50
ok 74, LINENUM:51
ok 75, LINENUM:53
ok 76, LINENUM:56
ok 77, LINENUM:83
ok 78, LINENUM:50
ok 79, LINENUM:51
ok 80, LINENUM:53
ok 81, LINENUM:56
ok 82, LINENUM:83
ok 83, LINENUM:50
ok 84, LINENUM:51
ok 85, LINENUM:53
ok 86, LINENUM:56
ok 87, LINENUM:83
ok 88, LINENUM:50
ok 89, LINENUM:51
ok 90, LINENUM:53
ok 91, LINENUM:56
ok 92, LINENUM:83
ok 93, LINENUM:50
ok 94, LINENUM:51
ok 95, LINENUM:53
ok 96, LINENUM:56
ok 97, LINENUM:83
ok 98, LINENUM:50
ok 99, LINENUM:51
ok 100, LINENUM:53
ok 101, LINENUM:56
ok 102, LINENUM:83
ok 103, LINENUM:87
ok 104, LINENUM:89
ok 105, LINENUM:95
ok
All tests successful.
Files=1, Tests=105, 198 wallclock secs ( 0.06 usr  0.00 sys + 10.35 cusr  6.18 csys = 16.59 CPU)
Result: PASS
End of test ./tests/bugs/core/bug-1432542-mpx-restart-crash.t
================================================================================


Run complete
================================================================================
Number of tests found:                             607
Number of tests selected for run based on pattern: 3
Number of tests skipped as they were marked bad:   0
Number of tests skipped because of known_issues:   0
Number of tests that were run:                     3

Tests ordered by time taken, slowest to fastest: 
================================================================================
./tests/basic/afr/lk-quorum.t  -  300 second
./tests/bugs/core/bug-1432542-mpx-restart-crash.t  -  294 second
./tests/basic/ec/ec-1468261.t  -  113 second

2 test(s) failed 
./tests/basic/afr/lk-quorum.t
./tests/basic/ec/ec-1468261.t

0 test(s) generated core 


3 test(s) needed retry 
./tests/basic/afr/lk-quorum.t
./tests/basic/ec/ec-1468261.t
./tests/bugs/core/bug-1432542-mpx-restart-crash.t

Result is 124

tar: Removing leading `/' from member names
ssh: connect to host http.int.rht.gluster.org port 22: Connection timed out
lost connection
kernel.core_pattern = /%e-%p.core
Build step 'Execute shell' marked build as failure
