[Gluster-Maintainers] Build failed in Jenkins: centos8-regression #447
jenkins at build.gluster.org
Mon Sep 13 16:04:42 UTC 2021
See <https://build.gluster.org/job/centos8-regression/447/display/redirect>
Changes:
------------------------------------------
[...truncated 497.81 KB...]
======================================== (13 / 795) ========================================
[14:49:33] Running tests in file ./tests/000-flaky/features_lock-migration_lkmigration-set-option.t
./tests/000-flaky/features_lock-migration_lkmigration-set-option.t ..
1..15
ok 1 [ 175/ 1979] < 9> 'glusterd'
ok 2 [ 10/ 7] < 10> 'pidof glusterd'
ok 3 [ 10/ 115] < 11> 'gluster --mode=script --wignore volume create patchy builder-c8-1.int.aws.gluster.org:/d/backends/brick1 builder-c8-1.int.aws.gluster.org:/d/backends/brick2'
ok 4 [ 13/ 196] < 12> 'gluster --mode=script --wignore volume start patchy'
ok 5 [ 12/ 124] < 14> 'gluster --mode=script --wignore volume set patchy lock-migration on'
ok 6 [ 108/ 3] < 15> 'on echo on'
ok 7 [ 10/ 121] < 16> 'gluster --mode=script --wignore volume set patchy lock-migration off'
ok 8 [ 105/ 5] < 17> 'off echo off'
ok 9 [ 12/ 91] < 18> '! gluster --mode=script --wignore volume set patchy lock-migration garbage'
ok 10 [ 106/ 3] < 20> 'off echo off'
ok 11 [ 12/ 2116] < 23> 'gluster --mode=script --wignore volume stop patchy'
ok 12 [ 10/ 2893] < 24> 'gluster --mode=script --wignore volume delete patchy'
ok 13 [ 12/ 129] < 29> 'gluster --mode=script --wignore volume create patchy replica 2 builder-c8-1.int.aws.gluster.org:/d/backends/brick1 builder-c8-1.int.aws.gluster.org:/d/backends/brick2'
ok 14 [ 14/ 1187] < 30> 'gluster --mode=script --wignore volume start patchy'
ok 15 [ 12/ 95] < 32> '! gluster --mode=script --wignore volume set patchy lock-migration on'
ok
All tests successful.
Files=1, Tests=15, 10 wallclock secs ( 0.02 usr 0.00 sys + 1.11 cusr 0.53 csys = 1.66 CPU)
Result: PASS
Logs preserved in tarball features_lock-migration_lkmigration-set-option-iteration-1.tar.gz
End of test ./tests/000-flaky/features_lock-migration_lkmigration-set-option.t
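Each `ok` line above follows the format emitted by Gluster's regression harness: test number, two bracketed numbers (apparently timings), the line number of the assertion in the `.t` file, and the command that was run. A hypothetical parser sketch for such lines (the interpretation of the two bracketed figures as timings is an assumption from context, not from official documentation):

```python
import re

# Parse a Gluster regression TAP line such as:
#   ok 4 [ 13/ 196] < 12> 'gluster --mode=script --wignore volume start patchy'
# Fields: test number, two bracketed numbers (assumed timings),
# the line number in the .t file, and the quoted command.
TAP_RE = re.compile(
    r"^ok\s+(?P<num>\d+)\s+\[\s*(?P<t1>\d+)/\s*(?P<t2>\d+)\]\s+"
    r"<\s*(?P<line>\d+)>\s+'(?P<cmd>.*)'$"
)

def parse_tap(line):
    """Return a dict of fields for an 'ok' TAP line, or None if it doesn't match."""
    m = TAP_RE.match(line)
    if not m:
        return None
    d = m.groupdict()
    return {k: (v if k == "cmd" else int(v)) for k, v in d.items()}

sample = "ok 4 [ 13/ 196] < 12> 'gluster --mode=script --wignore volume start patchy'"
print(parse_tap(sample))
```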
================================================================================
======================================== (14 / 795) ========================================
[14:49:43] Running tests in file ./tests/000-flaky/glusterd-restart-shd-mux.t
./tests/000-flaky/glusterd-restart-shd-mux.t ..
1..63
ok 1 [ 244/ 2075] < 10> 'glusterd'
ok 2 [ 12/ 13] < 11> 'pidof glusterd'
ok 3 [ 10/ 138] < 12> 'gluster --mode=script --wignore volume create patchy replica 3 builder-c8-1.int.aws.gluster.org:/d/backends/patchy0 builder-c8-1.int.aws.gluster.org:/d/backends/patchy1 builder-c8-1.int.aws.gluster.org:/d/backends/patchy2 builder-c8-1.int.aws.gluster.org:/d/backends/patchy3 builder-c8-1.int.aws.gluster.org:/d/backends/patchy4 builder-c8-1.int.aws.gluster.org:/d/backends/patchy5'
ok 4 [ 14/ 177] < 13> 'gluster --mode=script --wignore volume set patchy cluster.background-self-heal-count 0'
ok 5 [ 12/ 172] < 14> 'gluster --mode=script --wignore volume set patchy cluster.eager-lock off'
ok 6 [ 11/ 160] < 15> 'gluster --mode=script --wignore volume set patchy performance.flush-behind off'
ok 7 [ 12/ 1437] < 16> 'gluster --mode=script --wignore volume start patchy'
ok 8 [ 29/ 184] < 19> 'gluster --mode=script --wignore volume create patchy_afr1 replica 3 builder-c8-1.int.aws.gluster.org:/d/backends/patchy_afr10 builder-c8-1.int.aws.gluster.org:/d/backends/patchy_afr11 builder-c8-1.int.aws.gluster.org:/d/backends/patchy_afr12 builder-c8-1.int.aws.gluster.org:/d/backends/patchy_afr13 builder-c8-1.int.aws.gluster.org:/d/backends/patchy_afr14 builder-c8-1.int.aws.gluster.org:/d/backends/patchy_afr15'
ok 9 [ 12/ 388] < 20> 'gluster --mode=script --wignore volume start patchy_afr1'
ok 10 [ 40/ 163] < 21> 'gluster --mode=script --wignore volume create patchy_ec1 disperse 6 redundancy 2 builder-c8-1.int.aws.gluster.org:/d/backends/patchy_ec10 builder-c8-1.int.aws.gluster.org:/d/backends/patchy_ec11 builder-c8-1.int.aws.gluster.org:/d/backends/patchy_ec12 builder-c8-1.int.aws.gluster.org:/d/backends/patchy_ec13 builder-c8-1.int.aws.gluster.org:/d/backends/patchy_ec14 builder-c8-1.int.aws.gluster.org:/d/backends/patchy_ec15'
ok 11 [ 12/ 381] < 22> 'gluster --mode=script --wignore volume start patchy_ec1'
ok 12 [ 40/ 198] < 19> 'gluster --mode=script --wignore volume create patchy_afr2 replica 3 builder-c8-1.int.aws.gluster.org:/d/backends/patchy_afr20 builder-c8-1.int.aws.gluster.org:/d/backends/patchy_afr21 builder-c8-1.int.aws.gluster.org:/d/backends/patchy_afr22 builder-c8-1.int.aws.gluster.org:/d/backends/patchy_afr23 builder-c8-1.int.aws.gluster.org:/d/backends/patchy_afr24 builder-c8-1.int.aws.gluster.org:/d/backends/patchy_afr25'
ok 13 [ 13/ 390] < 20> 'gluster --mode=script --wignore volume start patchy_afr2'
ok 14 [ 55/ 166] < 21> 'gluster --mode=script --wignore volume create patchy_ec2 disperse 6 redundancy 2 builder-c8-1.int.aws.gluster.org:/d/backends/patchy_ec20 builder-c8-1.int.aws.gluster.org:/d/backends/patchy_ec21 builder-c8-1.int.aws.gluster.org:/d/backends/patchy_ec22 builder-c8-1.int.aws.gluster.org:/d/backends/patchy_ec23 builder-c8-1.int.aws.gluster.org:/d/backends/patchy_ec24 builder-c8-1.int.aws.gluster.org:/d/backends/patchy_ec25'
ok 15 [ 13/ 443] < 22> 'gluster --mode=script --wignore volume start patchy_ec2'
ok 16 [ 39/ 158] < 19> 'gluster --mode=script --wignore volume create patchy_afr3 replica 3 builder-c8-1.int.aws.gluster.org:/d/backends/patchy_afr30 builder-c8-1.int.aws.gluster.org:/d/backends/patchy_afr31 builder-c8-1.int.aws.gluster.org:/d/backends/patchy_afr32 builder-c8-1.int.aws.gluster.org:/d/backends/patchy_afr33 builder-c8-1.int.aws.gluster.org:/d/backends/patchy_afr34 builder-c8-1.int.aws.gluster.org:/d/backends/patchy_afr35'
ok 17 [ 13/ 374] < 20> 'gluster --mode=script --wignore volume start patchy_afr3'
ok 18 [ 50/ 180] < 21> 'gluster --mode=script --wignore volume create patchy_ec3 disperse 6 redundancy 2 builder-c8-1.int.aws.gluster.org:/d/backends/patchy_ec30 builder-c8-1.int.aws.gluster.org:/d/backends/patchy_ec31 builder-c8-1.int.aws.gluster.org:/d/backends/patchy_ec32 builder-c8-1.int.aws.gluster.org:/d/backends/patchy_ec33 builder-c8-1.int.aws.gluster.org:/d/backends/patchy_ec34 builder-c8-1.int.aws.gluster.org:/d/backends/patchy_ec35'
ok 19 [ 13/ 431] < 22> 'gluster --mode=script --wignore volume start patchy_ec3'
ok 20 [ 29/ 24] < 25> '^1$ shd_count'
ok 21 [ 20/ 10] < 28> 'pkill glusterd'
ok 22 [ 29/ 19] < 30> '^1$ shd_count'
ok 23 [ 12/ 2071] < 31> 'glusterd'
ok 24 [ 12/ 26] < 32> '^1$ shd_count'
ok 25 [ 12/ 6594] < 34> '^18$ number_healer_threads_shd patchy ec_shd_index_healer'
ok 26 [ 14/ 1099] < 36> '^24$ number_healer_threads_shd patchy afr_shd_index_healer'
ok 27 [ 110/ 7] < 41> '^148236$ cat /var/run/gluster/shd/patchy_afr1/patchy_afr1-shd.pid'
ok 28 [ 14/ 6] < 43> '^148236$ cat /var/run/gluster/shd/patchy_ec1/patchy_ec1-shd.pid'
ok 29 [ 13/ 6] < 41> '^148236$ cat /var/run/gluster/shd/patchy_afr2/patchy_afr2-shd.pid'
ok 30 [ 13/ 6] < 43> '^148236$ cat /var/run/gluster/shd/patchy_ec2/patchy_ec2-shd.pid'
ok 31 [ 14/ 6] < 41> '^148236$ cat /var/run/gluster/shd/patchy_afr3/patchy_afr3-shd.pid'
ok 32 [ 14/ 6] < 43> '^148236$ cat /var/run/gluster/shd/patchy_ec3/patchy_ec3-shd.pid'
ok 33 [ 14/ 78] < 47> 'pkill gluster'
ok 34 [ 19/ 16] < 49> '^0$ shd_count'
ok 35 [ 14/ 2065] < 51> 'glusterd'
ok 36 [ 29/ 870] < 52> '^1$ shd_count'
ok 37 [ 13/ 5014] < 55> '^18$ number_healer_threads_shd patchy ec_shd_index_healer'
ok 38 [ 17/ 1070] < 57> '^24$ number_healer_threads_shd patchy afr_shd_index_healer'
ok 39 [ 106/ 5] < 62> '^149107$ cat /var/run/gluster/shd/patchy_afr1/patchy_afr1-shd.pid'
ok 40 [ 12/ 6] < 64> '^149107$ cat /var/run/gluster/shd/patchy_ec1/patchy_ec1-shd.pid'
ok 41 [ 12/ 5] < 62> '^149107$ cat /var/run/gluster/shd/patchy_afr2/patchy_afr2-shd.pid'
ok 42 [ 12/ 5] < 64> '^149107$ cat /var/run/gluster/shd/patchy_ec2/patchy_ec2-shd.pid'
ok 43 [ 12/ 5] < 62> '^149107$ cat /var/run/gluster/shd/patchy_afr3/patchy_afr3-shd.pid'
ok 44 [ 12/ 5] < 64> '^149107$ cat /var/run/gluster/shd/patchy_ec3/patchy_ec3-shd.pid'
ok 45 [ 13/ 6129] < 68> 'gluster --mode=script --wignore volume stop patchy_afr1'
ok 46 [ 12/ 6126] < 69> 'gluster --mode=script --wignore volume stop patchy_ec1'
ok 47 [ 12/ 6174] < 68> 'gluster --mode=script --wignore volume stop patchy_afr2'
ok 48 [ 12/ 6124] < 69> 'gluster --mode=script --wignore volume stop patchy_ec2'
ok 49 [ 12/ 6128] < 68> 'gluster --mode=script --wignore volume stop patchy_afr3'
ok 50 [ 12/ 6124] < 69> 'gluster --mode=script --wignore volume stop patchy_ec3'
ok 51 [ 12/ 1034] < 72> '^6$ number_healer_threads_shd patchy afr_shd_index_healer'
ok 52 [ 12/ 80] < 74> '_GFS --attribute-timeout=0 --entry-timeout=0 --volfile-id=/patchy --volfile-server=builder-c8-1.int.aws.gluster.org /mnt/glusterfs/0'
ok 53 [ 12/ 113] < 76> 'kill_brick patchy builder-c8-1.int.aws.gluster.org /d/backends/patchy0'
ok 54 [ 12/ 110] < 77> 'kill_brick patchy builder-c8-1.int.aws.gluster.org /d/backends/patchy3'
ok 55 [ 15/ 997] < 79> 'touch /mnt/glusterfs/0/foo1 /mnt/glusterfs/0/foo2 /mnt/glusterfs/0/foo3 /mnt/glusterfs/0/foo4 /mnt/glusterfs/0/foo5 /mnt/glusterfs/0/foo6 /mnt/glusterfs/0/foo7 /mnt/glusterfs/0/foo8 /mnt/glusterfs/0/foo9 /mnt/glusterfs/0/foo10 /mnt/glusterfs/0/foo11 /mnt/glusterfs/0/foo12 /mnt/glusterfs/0/foo13 /mnt/glusterfs/0/foo14 /mnt/glusterfs/0/foo15 /mnt/glusterfs/0/foo16 /mnt/glusterfs/0/foo17 /mnt/glusterfs/0/foo18 /mnt/glusterfs/0/foo19 /mnt/glusterfs/0/foo20 /mnt/glusterfs/0/foo21 /mnt/glusterfs/0/foo22 /mnt/glusterfs/0/foo23 /mnt/glusterfs/0/foo24 /mnt/glusterfs/0/foo25 /mnt/glusterfs/0/foo26 /mnt/glusterfs/0/foo27 /mnt/glusterfs/0/foo28 /mnt/glusterfs/0/foo29 /mnt/glusterfs/0/foo30 /mnt/glusterfs/0/foo31 /mnt/glusterfs/0/foo32 /mnt/glusterfs/0/foo33 /mnt/glusterfs/0/foo34 /mnt/glusterfs/0/foo35 /mnt/glusterfs/0/foo36 /mnt/glusterfs/0/foo37 /mnt/glusterfs/0/foo38 /mnt/glusterfs/0/foo39 /mnt/glusterfs/0/foo40 /mnt/glusterfs/0/foo41 /mnt/glusterfs/0/foo42 /mnt/glusterfs/0/foo43 /mnt/glusterfs/0/foo44 /mnt/glusterfs/0/foo45 /mnt/glusterfs/0/foo46 /mnt/glusterfs/0/foo47 /mnt/glusterfs/0/foo48 /mnt/glusterfs/0/foo49 /mnt/glusterfs/0/foo50 /mnt/glusterfs/0/foo51 /mnt/glusterfs/0/foo52 /mnt/glusterfs/0/foo53 /mnt/glusterfs/0/foo54 /mnt/glusterfs/0/foo55 /mnt/glusterfs/0/foo56 /mnt/glusterfs/0/foo57 /mnt/glusterfs/0/foo58 /mnt/glusterfs/0/foo59 /mnt/glusterfs/0/foo60 /mnt/glusterfs/0/foo61 /mnt/glusterfs/0/foo62 /mnt/glusterfs/0/foo63 /mnt/glusterfs/0/foo64 /mnt/glusterfs/0/foo65 /mnt/glusterfs/0/foo66 /mnt/glusterfs/0/foo67 /mnt/glusterfs/0/foo68 /mnt/glusterfs/0/foo69 /mnt/glusterfs/0/foo70 /mnt/glusterfs/0/foo71 /mnt/glusterfs/0/foo72 /mnt/glusterfs/0/foo73 /mnt/glusterfs/0/foo74 /mnt/glusterfs/0/foo75 /mnt/glusterfs/0/foo76 /mnt/glusterfs/0/foo77 /mnt/glusterfs/0/foo78 /mnt/glusterfs/0/foo79 /mnt/glusterfs/0/foo80 /mnt/glusterfs/0/foo81 /mnt/glusterfs/0/foo82 /mnt/glusterfs/0/foo83 /mnt/glusterfs/0/foo84 /mnt/glusterfs/0/foo85 /mnt/glusterfs/0/foo86 /mnt/glusterfs/0/foo87 /mnt/glusterfs/0/foo88 /mnt/glusterfs/0/foo89 /mnt/glusterfs/0/foo90 /mnt/glusterfs/0/foo91 /mnt/glusterfs/0/foo92 /mnt/glusterfs/0/foo93 /mnt/glusterfs/0/foo94 /mnt/glusterfs/0/foo95 /mnt/glusterfs/0/foo96 /mnt/glusterfs/0/foo97 /mnt/glusterfs/0/foo98 /mnt/glusterfs/0/foo99 /mnt/glusterfs/0/foo100'
ok 56 [ 12/ 317] < 81> '^204$ get_pending_heal_count patchy'
ok 57 [ 14/ 183] < 83> 'gluster --mode=script --wignore volume start patchy force'
ok 58 [ 14/ 4299] < 85> '^0$ get_pending_heal_count patchy'
ok 59 [ 28/ 625] < 87> 'rm -rf /mnt/glusterfs/0/foo1 /mnt/glusterfs/0/foo10 /mnt/glusterfs/0/foo100 /mnt/glusterfs/0/foo11 /mnt/glusterfs/0/foo12 /mnt/glusterfs/0/foo13 /mnt/glusterfs/0/foo14 /mnt/glusterfs/0/foo15 /mnt/glusterfs/0/foo16 /mnt/glusterfs/0/foo17 /mnt/glusterfs/0/foo18 /mnt/glusterfs/0/foo19 /mnt/glusterfs/0/foo2 /mnt/glusterfs/0/foo20 /mnt/glusterfs/0/foo21 /mnt/glusterfs/0/foo22 /mnt/glusterfs/0/foo23 /mnt/glusterfs/0/foo24 /mnt/glusterfs/0/foo25 /mnt/glusterfs/0/foo26 /mnt/glusterfs/0/foo27 /mnt/glusterfs/0/foo28 /mnt/glusterfs/0/foo29 /mnt/glusterfs/0/foo3 /mnt/glusterfs/0/foo30 /mnt/glusterfs/0/foo31 /mnt/glusterfs/0/foo32 /mnt/glusterfs/0/foo33 /mnt/glusterfs/0/foo34 /mnt/glusterfs/0/foo35 /mnt/glusterfs/0/foo36 /mnt/glusterfs/0/foo37 /mnt/glusterfs/0/foo38 /mnt/glusterfs/0/foo39 /mnt/glusterfs/0/foo4 /mnt/glusterfs/0/foo40 /mnt/glusterfs/0/foo41 /mnt/glusterfs/0/foo42 /mnt/glusterfs/0/foo43 /mnt/glusterfs/0/foo44 /mnt/glusterfs/0/foo45 /mnt/glusterfs/0/foo46 /mnt/glusterfs/0/foo47 /mnt/glusterfs/0/foo48 /mnt/glusterfs/0/foo49 /mnt/glusterfs/0/foo5 /mnt/glusterfs/0/foo50 /mnt/glusterfs/0/foo51 /mnt/glusterfs/0/foo52 /mnt/glusterfs/0/foo53 /mnt/glusterfs/0/foo54 /mnt/glusterfs/0/foo55 /mnt/glusterfs/0/foo56 /mnt/glusterfs/0/foo57 /mnt/glusterfs/0/foo58 /mnt/glusterfs/0/foo59 /mnt/glusterfs/0/foo6 /mnt/glusterfs/0/foo60 /mnt/glusterfs/0/foo61 /mnt/glusterfs/0/foo62 /mnt/glusterfs/0/foo63 /mnt/glusterfs/0/foo64 /mnt/glusterfs/0/foo65 /mnt/glusterfs/0/foo66 /mnt/glusterfs/0/foo67 /mnt/glusterfs/0/foo68 /mnt/glusterfs/0/foo69 /mnt/glusterfs/0/foo7 /mnt/glusterfs/0/foo70 /mnt/glusterfs/0/foo71 /mnt/glusterfs/0/foo72 /mnt/glusterfs/0/foo73 /mnt/glusterfs/0/foo74 /mnt/glusterfs/0/foo75 /mnt/glusterfs/0/foo76 /mnt/glusterfs/0/foo77 /mnt/glusterfs/0/foo78 /mnt/glusterfs/0/foo79 /mnt/glusterfs/0/foo8 /mnt/glusterfs/0/foo80 /mnt/glusterfs/0/foo81 /mnt/glusterfs/0/foo82 /mnt/glusterfs/0/foo83 /mnt/glusterfs/0/foo84 /mnt/glusterfs/0/foo85 /mnt/glusterfs/0/foo86 /mnt/glusterfs/0/foo87 /mnt/glusterfs/0/foo88 /mnt/glusterfs/0/foo89 /mnt/glusterfs/0/foo9 /mnt/glusterfs/0/foo90 /mnt/glusterfs/0/foo91 /mnt/glusterfs/0/foo92 /mnt/glusterfs/0/foo93 /mnt/glusterfs/0/foo94 /mnt/glusterfs/0/foo95 /mnt/glusterfs/0/foo96 /mnt/glusterfs/0/foo97 /mnt/glusterfs/0/foo98 /mnt/glusterfs/0/foo99'
ok 60 [ 12/ 12] < 88> 'Y force_umount /mnt/glusterfs/0'
ok 61 [ 12/ 6125] < 91> 'gluster --mode=script --wignore volume stop patchy'
ok 62 [ 12/ 3112] < 92> 'gluster --mode=script --wignore volume delete patchy'
ok 63 [ 13/ 27] < 94> '^0$ shd_count'
ok
All tests successful.
Files=1, Tests=63, 83 wallclock secs ( 0.03 usr 0.01 sys + 13.68 cusr 5.12 csys = 18.84 CPU)
Result: PASS
Logs preserved in tarball glusterd-restart-shd-mux-iteration-1.tar.gz
End of test ./tests/000-flaky/glusterd-restart-shd-mux.t
================================================================================
======================================== (15 / 795) ========================================
[14:51:06] Running tests in file ./tests/00-geo-rep/00-georep-verify-non-root-setup.t
Timeout set is 900, default 200
Logs preserved in tarball 00-georep-verify-non-root-setup-iteration-1.tar.gz
./tests/00-geo-rep/00-georep-verify-non-root-setup.t timed out after 900 seconds
./tests/00-geo-rep/00-georep-verify-non-root-setup.t: bad status 124
*********************************
* REGRESSION FAILED *
* Retrying failed tests in case *
* we got some spurious failures *
*********************************
Logs preserved in tarball 00-georep-verify-non-root-setup-iteration-1.tar.gz
./tests/00-geo-rep/00-georep-verify-non-root-setup.t timed out after 900 seconds
End of test ./tests/00-geo-rep/00-georep-verify-non-root-setup.t
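The "bad status 124" above is the exit code GNU coreutils `timeout` returns when it has to kill a command that exceeded its time limit, which is how the harness distinguishes a timed-out `.t` file from one that failed on its own. A minimal sketch of that behavior (a 1-second limit here purely for illustration; the harness uses limits like the 900 seconds shown above):

```python
import subprocess

# GNU coreutils `timeout` kills the child after the limit expires and
# itself exits with status 124 -- the "bad status 124" in the log above.
# Here `sleep 5` stands in for a test script that runs too long.
p = subprocess.run(["timeout", "1", "sleep", "5"])
print(p.returncode)  # 124
```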
================================================================================
======================================== (16 / 795) ========================================
[15:21:06] Running tests in file ./tests/00-geo-rep/00-georep-verify-setup.t
Timeout set is 400, default 200
Logs preserved in tarball 00-georep-verify-setup-iteration-1.tar.gz
./tests/00-geo-rep/00-georep-verify-setup.t timed out after 400 seconds
./tests/00-geo-rep/00-georep-verify-setup.t: bad status 124
*********************************
* REGRESSION FAILED *
* Retrying failed tests in case *
* we got some spurious failures *
*********************************
Logs preserved in tarball 00-georep-verify-setup-iteration-1.tar.gz
./tests/00-geo-rep/00-georep-verify-setup.t timed out after 400 seconds
End of test ./tests/00-geo-rep/00-georep-verify-setup.t
================================================================================
======================================== (17 / 795) ========================================
[15:34:26] Running tests in file ./tests/00-geo-rep/01-georep-glusterd-tests.t
Timeout set is 300, default 200
Logs preserved in tarball 01-georep-glusterd-tests-iteration-1.tar.gz
./tests/00-geo-rep/01-georep-glusterd-tests.t timed out after 300 seconds
./tests/00-geo-rep/01-georep-glusterd-tests.t: bad status 124
*********************************
* REGRESSION FAILED *
* Retrying failed tests in case *
* we got some spurious failures *
*********************************
Logs preserved in tarball 01-georep-glusterd-tests-iteration-1.tar.gz
./tests/00-geo-rep/01-georep-glusterd-tests.t timed out after 300 seconds
End of test ./tests/00-geo-rep/01-georep-glusterd-tests.t
================================================================================
======================================== (18 / 795) ========================================
[15:44:26] Running tests in file ./tests/00-geo-rep/bug-1600145.t
Timeout set is 600, default 200
Logs preserved in tarball bug-1600145-iteration-1.tar.gz
./tests/00-geo-rep/bug-1600145.t timed out after 600 seconds
./tests/00-geo-rep/bug-1600145.t: bad status 124
*********************************
* REGRESSION FAILED *
* Retrying failed tests in case *
* we got some spurious failures *
*********************************
FATAL: command execution failed
java.io.EOFException
at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2798)
at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:3273)
at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:933)
at java.io.ObjectInputStream.<init>(ObjectInputStream.java:395)
at hudson.remoting.ObjectInputStreamEx.<init>(ObjectInputStreamEx.java:49)
at hudson.remoting.Command.readFrom(Command.java:142)
at hudson.remoting.Command.readFrom(Command.java:128)
at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:35)
at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:61)
Caused: java.io.IOException: Unexpected termination of the channel
at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:75)
Caused: java.io.IOException: Backing channel 'builder-c8-1.int.aws.gluster.org' is disconnected.
at hudson.remoting.RemoteInvocationHandler.channelOrFail(RemoteInvocationHandler.java:216)
at hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:286)
at com.sun.proxy.$Proxy81.isAlive(Unknown Source)
at hudson.Launcher$RemoteLauncher$ProcImpl.isAlive(Launcher.java:1214)
at hudson.Launcher$RemoteLauncher$ProcImpl.join(Launcher.java:1206)
at hudson.tasks.CommandInterpreter.join(CommandInterpreter.java:195)
at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:145)
at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:92)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:21)
at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:808)
at hudson.model.Build$BuildExecution.build(Build.java:199)
at hudson.model.Build$BuildExecution.doRun(Build.java:164)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:516)
at hudson.model.Run.execute(Run.java:1889)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:100)
at hudson.model.Executor.run(Executor.java:433)
FATAL: Unable to delete script file /tmp/jenkins2352764605617671370.sh
java.io.EOFException
at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2798)
at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:3273)
at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:933)
at java.io.ObjectInputStream.<init>(ObjectInputStream.java:395)
at hudson.remoting.ObjectInputStreamEx.<init>(ObjectInputStreamEx.java:49)
at hudson.remoting.Command.readFrom(Command.java:142)
at hudson.remoting.Command.readFrom(Command.java:128)
at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:35)
at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:61)
Caused: java.io.IOException: Unexpected termination of the channel
at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:75)
Caused: hudson.remoting.ChannelClosedException: Channel "hudson.remoting.Channel at 4d18d0aa:builder-c8-1.int.aws.gluster.org": Remote call on builder-c8-1.int.aws.gluster.org failed. The channel is closing down or has closed down
at hudson.remoting.Channel.call(Channel.java:994)
at hudson.FilePath.act(FilePath.java:1167)
at hudson.FilePath.act(FilePath.java:1156)
at hudson.FilePath.delete(FilePath.java:1680)
at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:163)
at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:92)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:21)
at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:808)
at hudson.model.Build$BuildExecution.build(Build.java:199)
at hudson.model.Build$BuildExecution.doRun(Build.java:164)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:516)
at hudson.model.Run.execute(Run.java:1889)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:100)
at hudson.model.Executor.run(Executor.java:433)
Build step 'Execute shell' marked build as failure
ERROR: Unable to tear down: null
java.lang.NullPointerException
at org.jenkinsci.plugins.credentialsbinding.impl.UnbindableDir.tempDir(UnbindableDir.java:67)
at org.jenkinsci.plugins.credentialsbinding.impl.UnbindableDir.secretsDir(UnbindableDir.java:62)
at org.jenkinsci.plugins.credentialsbinding.impl.UnbindableDir.access$000(UnbindableDir.java:23)
at org.jenkinsci.plugins.credentialsbinding.impl.UnbindableDir$UnbinderImpl.unbind(UnbindableDir.java:84)
at org.jenkinsci.plugins.credentialsbinding.impl.SecretBuildWrapper$1.tearDown(SecretBuildWrapper.java:111)
at hudson.model.AbstractBuild$AbstractBuildExecution.tearDownBuildEnvironments(AbstractBuild.java:556)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:520)
at hudson.model.Run.execute(Run.java:1889)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:100)
at hudson.model.Executor.run(Executor.java:433)
ERROR: builder-c8-1.int.aws.gluster.org is offline; cannot locate java-1.6.0-openjdk-1.6.0.0.x86_64