[Gluster-Maintainers] Build failed in Jenkins: regression-test-burn-in #5035
jenkins at build.gluster.org
Tue Apr 21 18:51:11 UTC 2020
See <https://build.gluster.org/job/regression-test-burn-in/5035/display/redirect?page=changes>
Changes:
[Amar Tumballi] dht/rebalance - fixing recursive failure issue
------------------------------------------
[...truncated 324.47 KB...]
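A quick key to the harness lines below: in output such as

    ok 31 [ 20/ 1785] < 114> 'gluster --mode=script ... start force'

the two bracketed numbers appear to be per-test timings in milliseconds (time spent preparing the check, then time the check itself took), <114> is the line number of the TEST statement inside the .t file, and the quoted string is the command the harness ran. This reading is inferred from the glusterfs test harness's TAP-style formatting; the log itself does not label the fields.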
stat: cannot stat ‘/var/lib/glusterd/geo-replication/master_slave_common_secret.pem.pub’: No such file or directory
[last message repeated 4 more times]
ok 27 [ 13/ 15256] < 102> '0 check_common_secret_file'
ok 28 [ 14/ 1584] < 105> '0 check_keys_distributed'
ok 29 [ 14/ 802] < 108> 'gluster --mode=script --wignore volume geo-replication master 127.0.0.1::slave start'
ok 30 [ 26/ 1272] < 111> 'gluster --mode=script --wignore volume geo-replication status'
ok 31 [ 20/ 1785] < 114> 'gluster --mode=script --wignore volume geo-replication master root@127.0.0.1::slave1 start force'
ok 32 [ 131/ 6055] < 118> '! gluster --mode=script --wignore volume geo-replication master root@127.0.0.1::slave1 create push-pem'
ok 33 [ 19/ 3154] < 120> '! gluster --mode=script --wignore volume geo-replication master 127.0.0.1::slave1 create push-pem'
ok 34 [ 12/ 3025] < 121> 'gluster --mode=script --wignore volume geo-replication master root@127.0.0.1::slave1 create push-pem force'
ok 35 [ 15/ 736] < 124> '1 check_status_num_rows Active'
ok 36 [ 17/ 699] < 125> '2 check_status_num_rows Passive'
ok 37 [ 17/ 1382] < 127> '2 check_fanout_status_num_rows Active'
ok 38 [ 37/ 1241] < 128> '4 check_fanout_status_num_rows Passive'
ok 39 [ 14/ 1255] < 130> '2 check_fanout_status_detail_num_rows Active'
ok 40 [ 23/ 1041] < 131> '4 check_fanout_status_detail_num_rows Passive'
ok 41 [ 11/ 950] < 133> '2 check_all_status_num_rows Active'
ok 42 [ 11/ 942] < 134> '4 check_all_status_num_rows Passive'
ok 43 [ 11/ 948] < 136> '2 check_all_status_detail_num_rows Active'
ok 44 [ 11/ 1070] < 137> '4 check_all_status_detail_num_rows Passive'
ok 45 [ 21/ 254] < 144> 'gluster --mode=script --wignore volume geo-replication master 127.0.0.1::slave config checkpoint now'
ok 46 [ 11/ 498] < 145> '0 verify_checkpoint_met master 127.0.0.1::slave'
ok 47 [ 14/ 14855] < 147> '1 verify_checkpoint_met master 127.0.0.1::slave'
ok 48 [ 11/ 196] < 151> 'gluster --mode=script --wignore volume geo-replication master 127.0.0.1::slave config'
ok 49 [ 11/ 233] < 152> '! gluster --mode=script --wignore volume geo-replication master 127.0.0.1::slave config arsync-options -W'
ok 50 [ 11/ 1756] < 153> 'gluster --mode=script --wignore volume geo-replication master 127.0.0.1::slave config rsync-options -W'
ok 51 [ 23/ 299] < 154> 'gluster --mode=script --wignore volume geo-replication master 127.0.0.1::slave config rsync-options'
ok 52 [ 12/ 906] < 155> 'gluster --mode=script --wignore volume geo-replication master 127.0.0.1::slave config !rsync-options'
ok 53 [ 19/ 951] < 156> 'gluster --mode=script --wignore volume geo-replication master 127.0.0.1::slave config sync-xattrs false'
ok 54 [ 82/ 387] < 161> '! gluster --mode=script --wignore volume geo-replication master root@127.0.0.1::slave1 resume'
ok 55 [ 16/ 617] < 162> 'gluster --mode=script --wignore volume geo-replication master root@127.0.0.1::slave1 resume force'
ok 56 [ 13/ 843] < 165> 'gluster --mode=script --wignore volume geo-replication master root@127.0.0.1::slave1 pause force'
ok 57 [ 94/ 1378] < 168> 'gluster --mode=script --wignore volume geo-replication master root@127.0.0.1::slave1 resume force'
ok 58 [ 25/ 1517] < 171> 'gluster --mode=script --wignore volume geo-replication master root@127.0.0.1::slave1 stop force'
ok 59 [ 12/ 244] < 174> '! gluster --mode=script --wignore volume geo-replication master root@127.0.0.1::slave1 resume'
ok 60 [ 18/ 2] < 185> '! grep slave2=a8693162-277e-4d9a-9e74-f5714cadd291:ssh://127.0.0.1::slave1:7669282b-e731-4036-aba0-276bc162d335 /var/lib/glusterd/vols/master/info'
ok 61 [ 10/ 34] < 188> 'pkill glusterd'
ok 62 [ 13/ 1221] < 189> 'glusterd'
ok 63 [ 11/ 26] < 190> 'pidof glusterd'
ok 64 [ 12/ 2655] < 193> 'gluster --mode=script --wignore volume geo-replication master root@127.0.0.1::slave1 start force'
ok 65 [ 17/ 4] < 194> 'grep slave2=a8693162-277e-4d9a-9e74-f5714cadd291:ssh://127.0.0.1::slave1:7669282b-e731-4036-aba0-276bc162d335 /var/lib/glusterd/vols/master/info'
ok 66 [ 16/ 255] < 198> '! gluster --mode=script --wignore volume geo-replication master root@127.0.0.1::slave1 delete'
ok 67 [ 15/ 678] < 201> 'gluster --mode=script --wignore volume geo-replication master root@127.0.0.1::slave1 stop force'
ok 68 [ 86/ 751] < 202> 'gluster --mode=script --wignore volume geo-replication master root@127.0.0.1::slave1 delete reset-sync-time'
ok 69 [ 11/ 1416] < 205> 'gluster --mode=script --wignore volume geo-replication master 127.0.0.1::slave stop'
ok 70 [ 11/ 438] < 206> 'gluster --mode=script --wignore volume geo-replication master 127.0.0.1::slave delete'
ok
All tests successful.
Files=1, Tests=70, 102 wallclock secs ( 0.04 usr 0.00 sys + 4.15 cusr 2.79 csys = 6.98 CPU)
Result: PASS
Logs preserved in tarball 01-georep-glusterd-tests-iteration-1.tar
End of test ./tests/00-geo-rep/01-georep-glusterd-tests.t
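The repeated "stat: cannot stat ... master_slave_common_secret.pem.pub" errors in this test are expected noise rather than failures: check_common_secret_file is retried by the harness until the common secret pem file has actually been pushed, and every attempt made before 'create push-pem' completes prints one such error. A minimal sketch of that retry pattern in shell (the function body and timeout value are illustrative assumptions; the real framework drives this through its EXPECT_WITHIN helper):

    #!/bin/bash
    # Poll until the geo-rep common secret pem exists; each failed attempt
    # prints one "stat: cannot stat ..." line like those seen in the log.
    PEM=/var/lib/glusterd/geo-replication/master_slave_common_secret.pem.pub

    check_common_secret_file() {
        stat "$PEM" > /dev/null    # non-zero (plus one error line) until pushed
    }

    timeout=30                     # seconds; illustrative value
    until check_common_secret_file; do
        timeout=$((timeout - 1))
        if [ "$timeout" -le 0 ]; then
            echo "timed out waiting for $PEM" >&2
            exit 1
        fi
        sleep 1
    done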
================================================================================
================================================================================
[18:47:53] Running tests in file ./tests/00-geo-rep/bug-1600145.t
Timeout set is 600, default 200
./tests/00-geo-rep/bug-1600145.t ..
1..26
ok 1 [ 201/ 1031] < 13> 'glusterd'
ok 2 [ 8/ 18] < 14> 'pidof glusterd'
ok 3 [ 9/ 88] < 31> 'gluster --mode=script --wignore volume create master replica 2 builder209.int.aws.gluster.org:/d/backends/master1 builder209.int.aws.gluster.org:/d/backends/master2'
volume set: success
ok 4 [ 80/ 2461] < 33> 'gluster --mode=script --wignore volume start master'
ok 5 [ 11/ 96] < 36> 'gluster --mode=script --wignore volume create slave replica 2 builder209.int.aws.gluster.org:/d/backends/slave1 builder209.int.aws.gluster.org:/d/backends/slave2'
ok 6 [ 11/ 2117] < 37> 'gluster --mode=script --wignore volume start slave'
ok 7 [ 11/ 121] < 40> 'gluster --mode=script --wignore volume create gluster_shared_storage replica 3 builder209.int.aws.gluster.org:/d/backends/gluster_shared_storage1 builder209.int.aws.gluster.org:/d/backends/gluster_shared_storage2 builder209.int.aws.gluster.org:/d/backends/gluster_shared_storage3'
ok 8 [ 11/ 1080] < 41> 'gluster --mode=script --wignore volume start gluster_shared_storage'
ok 9 [ 21/ 3] < 42> 'mkdir -p /var/run/gluster/shared_storage'
ok 10 [ 25/ 36] < 43> 'glusterfs -s builder209.int.aws.gluster.org --volfile-id gluster_shared_storage /var/run/gluster/shared_storage'
ok 11 [ 33/ 6109] < 50> 'create_georep_session master 127.0.0.1::slave'
ok 12 [ 11/ 326] < 53> 'gluster --mode=script --wignore volume geo-replication master 127.0.0.1::slave config gluster-command-dir /build/install/sbin'
ok 13 [ 18/ 322] < 56> 'gluster --mode=script --wignore volume geo-replication master 127.0.0.1::slave config slave-gluster-command-dir /build/install/sbin'
ok 14 [ 10/ 482] < 59> 'gluster --mode=script --wignore volume geo-replication master 127.0.0.1::slave config use_meta_volume true'
stat: cannot stat ‘/var/lib/glusterd/geo-replication/master_slave_common_secret.pem.pub’: No such file or directory
[last message repeated 36 more times]
ok 15 [ 13/ 9538] < 62> '0 check_common_secret_file'
ok 16 [ 14/ 1314] < 66> '0 check_keys_distributed'
ok 17 [ 47/ 748] < 73> 'gluster --mode=script --wignore volume geo-replication master 127.0.0.1::slave start'
ok 18 [ 14/ 7858] < 75> '1 check_status_num_rows Active'
ok 19 [ 9/ 389] < 76> '1 check_status_num_rows Passive'
ok 20 [ 45/ 1] < 82> '[ 4 -eq 4 ]'
ffff902e06cd6400: 00000002 00000000 00010000 0001 01 438252400 /var/run/gluster/changelog-e12f431cbc109260.sock
ffff902e362b8800: 00000002 00000000 00010000 0001 01 438252235 /var/run/gluster/changelog-15fa20753f22ef9e.sock
ffff902e02978c00: 00000003 00000000 00000000 0001 03 438257785 /var/run/gluster/changelog-e12f431cbc109260.sock
ffff902e783c8800: 00000003 00000000 00000000 0001 03 438257762 /var/run/gluster/changelog-15fa20753f22ef9e.sock
ok 21 [ 396/ 1072] < 87> 'kill_brick master builder209.int.aws.gluster.org /d/backends/master1'
ok 22 [ 10/ 3691] < 89> '1 check_status_num_rows Faulty'
ok 23 [ 9/ 385] < 90> '1 check_status_num_rows Active'
ffff902e06cd6400: 00000002 00000000 00010000 0001 01 438252400 /var/run/gluster/changelog-e12f431cbc109260.sock
ffff902e02978c00: 00000003 00000000 00000000 0001 03 438257785 /var/run/gluster/changelog-e12f431cbc109260.sock
lrwx------. 1 root root 64 Apr 21 18:48 9 -> socket:[438249806]
lrwx------. 1 root root 64 Apr 21 18:48 8 -> socket:[438248173]
lrwx------. 1 root root 64 Apr 21 18:48 7 -> socket:[438249799]
lrwx------. 1 root root 64 Apr 21 18:48 528 -> socket:[438248244]
lrwx------. 1 root root 64 Apr 21 18:48 4 -> socket:[438249784]
lrwx------. 1 root root 64 Apr 21 18:48 11 -> socket:[438249813]
lrwx------. 1 root root 64 Apr 21 18:48 1064 -> socket:[438257404]
lrwx------. 1 root root 64 Apr 21 18:48 1062 -> socket:[438257785]
lrwx------. 1 root root 64 Apr 21 18:48 1060 -> socket:[438256807]
lrwx------. 1 root root 64 Apr 21 18:48 1056 -> socket:[438256363]
lrwx------. 1 root root 64 Apr 21 18:48 1055 -> socket:[438256354]
lrwx------. 1 root root 64 Apr 21 18:48 1051 -> socket:[438252400]
lrwx------. 1 root root 64 Apr 21 18:48 1046 -> socket:[438250505]
lrwx------. 1 root root 64 Apr 21 18:48 1045 -> socket:[438248409]
ok 24 [ 61/ 1] < 97> '[ 2 -eq 2 ]'
ok 25 [ 9/ 1408] < 100> 'gluster --mode=script --wignore volume geo-replication master 127.0.0.1::slave stop'
ok 26 [ 9/ 434] < 103> 'gluster --mode=script --wignore volume geo-replication master 127.0.0.1::slave delete'
ok
All tests successful.
Files=1, Tests=26, 43 wallclock secs ( 0.03 usr 0.00 sys + 1.96 cusr 1.50 csys = 3.49 CPU)
Result: PASS
Logs preserved in tarball bug-1600145-iteration-1.tar
End of test ./tests/00-geo-rep/bug-1600145.t
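The hex rows interleaved with tests 20-24 match the column layout of /proc/net/unix entries (kernel address, refcounts, flags, inode, socket path) for the changelog notification sockets, and the 'lrwx------ ... -> socket:[...]' lines look like an ls -l of a brick process's /proc/<pid>/fd. The test dumps them around its two counting assertions: four changelog socket entries before kill_brick ('[ 4 -eq 4 ]'), two after ('[ 2 -eq 2 ]'), i.e. the killed brick's sockets were cleaned up rather than leaked. A sketch of that accounting, assuming /proc/net/unix as the source:

    # Count live changelog notification sockets for this run; per the log,
    # bug-1600145.t expects the count to drop from 4 to 2 after kill_brick.
    count_changelog_sockets() {
        grep -c '/var/run/gluster/changelog-.*\.sock' /proc/net/unix
    }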
================================================================================
================================================================================
[18:48:36] Running tests in file ./tests/00-geo-rep/bug-1708603.t
Timeout set is 300, default 200
./tests/00-geo-rep/bug-1708603.t ..
1..13
ok 1 [ 179/ 1028] < 12> 'glusterd'
ok 2 [ 9/ 18] < 13> 'pidof glusterd'
ok 3 [ 9/ 86] < 31> 'gluster --mode=script --wignore volume create master replica 2 builder209.int.aws.gluster.org:/d/backends/master1 builder209.int.aws.gluster.org:/d/backends/master2 builder209.int.aws.gluster.org:/d/backends/master3 builder209.int.aws.gluster.org:/d/backends/master4'
ok 4 [ 13/ 1613] < 32> 'gluster --mode=script --wignore volume start master'
ok 5 [ 11/ 106] < 35> 'gluster --mode=script --wignore volume create slave replica 2 builder209.int.aws.gluster.org:/d/backends/slave1 builder209.int.aws.gluster.org:/d/backends/slave2 builder209.int.aws.gluster.org:/d/backends/slave3 builder209.int.aws.gluster.org:/d/backends/slave4'
ok 6 [ 12/ 1251] < 36> 'gluster --mode=script --wignore volume start slave'
ok 7 [ 53/ 92] < 39> 'glusterfs -s builder209.int.aws.gluster.org --volfile-id master /mnt/glusterfs/0'
ok 8 [ 14/ 72] < 42> 'glusterfs -s builder209.int.aws.gluster.org --volfile-id slave /mnt/glusterfs/1'
ok 9 [ 20/ 6695] < 45> 'create_georep_session master 127.0.0.1::slave'
ok 10 [ 446/ 3] < 48> 'false echo false'
There exists ~15 seconds delay for the option to take effect from stime of the corresponding brick. Please check the log for the time, the option is effective. Proceed (y/n) geo-replication config updated successfully
ok 11 [ 812/ 3] < 50> 'true echo true'
ok 12 [ 10/ 383] < 53> 'gluster volume geo-replication master 127.0.0.1::slave stop'
ok 13 [ 11/ 576] < 56> 'gluster volume geo-replication master 127.0.0.1::slave delete'
ok
All tests successful.
Files=1, Tests=13, 14 wallclock secs ( 0.02 usr 0.00 sys + 0.95 cusr 0.72 csys = 1.69 CPU)
Result: PASS
Logs preserved in tarball bug-1708603-iteration-1.tar
End of test ./tests/00-geo-rep/bug-1708603.t
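The "Proceed (y/n)" prompt above comes from a geo-rep config change run without script mode; the gluster CLI only asks for interactive confirmation outside --mode=script, which is why the --mode=script invocations elsewhere in this log never prompt. Two ways to keep such a step non-interactive (the option shown is just one taken from earlier in this log):

    # Script mode suppresses the confirmation prompt entirely:
    gluster --mode=script volume geo-replication master 127.0.0.1::slave \
        config use_meta_volume true

    # Or feed the answer on stdin:
    echo y | gluster volume geo-replication master 127.0.0.1::slave \
        config use_meta_volume true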
================================================================================
================================================================================
[18:48:50] Running tests in file ./tests/00-geo-rep/georep-basic-dr-rsync-arbiter.t
Timeout set is 500, default 200
FATAL: command execution failed
java.io.EOFException
at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2735)
at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:3210)
at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:895)
at java.io.ObjectInputStream.<init>(ObjectInputStream.java:357)
at hudson.remoting.ObjectInputStreamEx.<init>(ObjectInputStreamEx.java:49)
at hudson.remoting.Command.readFrom(Command.java:142)
at hudson.remoting.Command.readFrom(Command.java:128)
at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:35)
at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:63)
Caused: java.io.IOException: Unexpected termination of the channel
at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:77)
Caused: java.io.IOException: Backing channel 'builder209.aws.gluster.org' is disconnected.
at hudson.remoting.RemoteInvocationHandler.channelOrFail(RemoteInvocationHandler.java:216)
at hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:285)
at com.sun.proxy.$Proxy86.isAlive(Unknown Source)
at hudson.Launcher$RemoteLauncher$ProcImpl.isAlive(Launcher.java:1147)
at hudson.Launcher$RemoteLauncher$ProcImpl.join(Launcher.java:1139)
at hudson.tasks.CommandInterpreter.join(CommandInterpreter.java:155)
at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:109)
at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:66)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:741)
at hudson.model.Build$BuildExecution.build(Build.java:206)
at hudson.model.Build$BuildExecution.doRun(Build.java:163)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:504)
at hudson.model.Run.execute(Run.java:1856)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:428)
FATAL: Unable to delete script file /tmp/jenkins3153971763788463531.sh
java.io.EOFException
at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2735)
at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:3210)
at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:895)
at java.io.ObjectInputStream.<init>(ObjectInputStream.java:357)
at hudson.remoting.ObjectInputStreamEx.<init>(ObjectInputStreamEx.java:49)
at hudson.remoting.Command.readFrom(Command.java:142)
at hudson.remoting.Command.readFrom(Command.java:128)
at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:35)
at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:63)
Caused: java.io.IOException: Unexpected termination of the channel
at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:77)
Caused: hudson.remoting.ChannelClosedException: Channel "hudson.remoting.Channel@6b515f04:builder209.aws.gluster.org": Remote call on builder209.aws.gluster.org failed. The channel is closing down or has closed down
at hudson.remoting.Channel.call(Channel.java:991)
at hudson.FilePath.act(FilePath.java:1069)
at hudson.FilePath.act(FilePath.java:1058)
at hudson.FilePath.delete(FilePath.java:1539)
at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:123)
at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:66)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:741)
at hudson.model.Build$BuildExecution.build(Build.java:206)
at hudson.model.Build$BuildExecution.doRun(Build.java:163)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:504)
at hudson.model.Run.execute(Run.java:1856)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:428)
Build step 'Execute shell' marked build as failure
ERROR: builder209.aws.gluster.org is offline; cannot locate java-1.6.0-openjdk-1.6.0.0.x86_64
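Note the failure mode: all three test files that actually ran (01-georep-glusterd-tests.t, bug-1600145.t, bug-1708603.t) passed. The build went red because the builder209 agent dropped off the Jenkins remoting channel just as georep-basic-dr-rsync-arbiter.t was starting. The java.io.EOFException / "Unexpected termination of the channel" trace is the master losing the agent connection, and the subsequent "Unable to delete script file" error and the offline ERROR above are consequences of that same disconnect, not additional test failures.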