[Gluster-Maintainers] Build failed in Jenkins: experimental-periodic #598

jenkins at build.gluster.org
Wed Feb 13 16:27:18 UTC 2019


See <https://build.gluster.org/job/experimental-periodic/598/display/redirect>

------------------------------------------
[...truncated 570.62 KB...]
[15:46:58] Running tests in file ./tests/bugs/distribute/bug-1193636.t
Logs preserved in tarball bug-1193636-iteration-1.tar
./tests/bugs/distribute/bug-1193636.t timed out after 200 seconds
./tests/bugs/distribute/bug-1193636.t: bad status 124

       *********************************
       *       REGRESSION FAILED       *
       * Retrying failed tests in case *
       * we got some spurious failures *
       *********************************

Logs preserved in tarball bug-1193636-iteration-2.tar
./tests/bugs/distribute/bug-1193636.t timed out after 200 seconds
End of test ./tests/bugs/distribute/bug-1193636.t
================================================================================
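
A note on the exit codes above: 124 is the status GNU coreutils timeout(1) returns when it has to kill a command at the deadline, so "bad status 124" means the test was still running at the 200-second limit rather than failing on its own. A minimal sketch of where that number comes from (the direct timeout invocation is an assumption for illustration; the regression wrapper may apply the limit differently):

    # timeout(1) reports 124 when it kills the command at the time limit;
    # any other non-zero status would have come from the test itself.
    timeout 200 ./tests/bugs/distribute/bug-1193636.t
    echo $?    # 124 here means the 200s limit was hit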


================================================================================
[15:53:38] Running tests in file ./tests/bugs/distribute/bug-1204140.t
Logs preserved in tarball bug-1204140-iteration-1.tar
./tests/bugs/distribute/bug-1204140.t timed out after 200 seconds
./tests/bugs/distribute/bug-1204140.t: bad status 124

       *********************************
       *       REGRESSION FAILED       *
       * Retrying failed tests in case *
       * we got some spurious failures *
       *********************************

Logs preserved in tarball bug-1204140-iteration-2.tar
./tests/bugs/distribute/bug-1204140.t timed out after 200 seconds
End of test ./tests/bugs/distribute/bug-1204140.t
================================================================================


================================================================================
Skipping bad test file ./tests/bugs/distribute/bug-1247563.t
Reason: bug(s): 000000
================================================================================


================================================================================
[16:00:18] Running tests in file ./tests/bugs/distribute/bug-1368012.t
Logs preserved in tarball bug-1368012-iteration-1.tar
./tests/bugs/distribute/bug-1368012.t timed out after 200 seconds
./tests/bugs/distribute/bug-1368012.t: bad status 124

       *********************************
       *       REGRESSION FAILED       *
       * Retrying failed tests in case *
       * we got some spurious failures *
       *********************************

Logs preserved in tarball bug-1368012-iteration-2.tar
./tests/bugs/distribute/bug-1368012.t timed out after 200 seconds
End of test ./tests/bugs/distribute/bug-1368012.t
================================================================================


================================================================================
[16:06:59] Running tests in file ./tests/bugs/distribute/bug-1389697.t
./tests/bugs/distribute/bug-1389697.t .. 
mkdir: cannot create directory ‘/d/backends’: No space left on device
1..12
mkdir: cannot create directory ‘/d/backends’: No space left on device
mkdir: cannot create directory ‘/d/backends’: No space left on device
mkdir: cannot create directory ‘/d/backends’: No space left on device
mkdir: cannot create directory ‘/d/backends’: No space left on device
mkdir: cannot create directory ‘/d/backends’: No space left on device
not ok 1 , LINENUM:9
FAILED COMMAND: launch_cluster 2
not ok 2 , LINENUM:10
FAILED COMMAND: gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/bug-1389697.t_cli1.log peer probe 127.1.1.2
not ok 3 Got "0" instead of "1", LINENUM:11
FAILED COMMAND: 1 peer_count
not ok 4 , LINENUM:13
FAILED COMMAND: gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/bug-1389697.t_cli1.log volume create patchy 127.1.1.1:/d/backends/1/b1 127.1.1.1:/d/backends/1/b2 127.1.1.2:/d/backends/2/b3
not ok 5 , LINENUM:14
FAILED COMMAND: gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/bug-1389697.t_cli1.log volume start patchy
not ok 6 , LINENUM:17
FAILED COMMAND: gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/bug-1389697.t_cli1.log volume rebalance patchy fix-layout start
not ok 7 , LINENUM:20
FAILED COMMAND: gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/bug-1389697.t_cli1.log volume rebalance patchy status
Connection failed. Please check if gluster daemon is operational.
not ok 8 , LINENUM:25
FAILED COMMAND: [ 1 -eq 0 ]
not ok 9 , LINENUM:28
FAILED COMMAND: gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/bug-1389697.t_cli1.log volume remove-brick patchy 127.1.1.2:/d/backends/2/b3 start
not ok 10 , LINENUM:29
FAILED COMMAND: gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/bug-1389697.t_cli1.log volume remove-brick patchy 127.1.1.2:/d/backends/2/b3 status
ok 11, LINENUM:34
Connection failed. Please check if gluster daemon is operational.
Connection failed. Please check if gluster daemon is operational.
not ok 12 , LINENUM:40
FAILED COMMAND: gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/bug-1389697.t_cli1.log volume remove-brick patchy 127.1.1.2:/d/backends/2/b3 stop
mkdir: cannot create directory ‘/d/backends’: No space left on device
Failed 11/12 subtests 

Test Summary Report
-------------------
./tests/bugs/distribute/bug-1389697.t (Wstat: 0 Tests: 12 Failed: 11)
  Failed tests:  1-10, 12
Files=1, Tests=12, 93 wallclock secs ( 0.02 usr  0.00 sys +  1.68 cusr  1.02 csys =  2.72 CPU)
Result: FAIL
Logs preserved in tarball bug-1389697-iteration-1.tar
./tests/bugs/distribute/bug-1389697.t: bad status 1

       *********************************
       *       REGRESSION FAILED       *
       * Retrying failed tests in case *
       * we got some spurious failures *
       *********************************

./tests/bugs/distribute/bug-1389697.t .. 
mkdir: cannot create directory ‘/d/backends’: No space left on device
1..12
mkdir: cannot create directory ‘/d/backends’: No space left on device
mkdir: cannot create directory ‘/d/backends’: No space left on device
mkdir: cannot create directory ‘/d/backends’: No space left on device
mkdir: cannot create directory ‘/d/backends’: No space left on device
mkdir: cannot create directory ‘/d/backends’: No space left on device
not ok 1 , LINENUM:9
FAILED COMMAND: launch_cluster 2
not ok 2 , LINENUM:10
FAILED COMMAND: gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/bug-1389697.t_cli1.log peer probe 127.1.1.2
not ok 3 Got "0" instead of "1", LINENUM:11
FAILED COMMAND: 1 peer_count
not ok 4 , LINENUM:13
FAILED COMMAND: gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/bug-1389697.t_cli1.log volume create patchy 127.1.1.1:/d/backends/1/b1 127.1.1.1:/d/backends/1/b2 127.1.1.2:/d/backends/2/b3
not ok 5 , LINENUM:14
FAILED COMMAND: gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/bug-1389697.t_cli1.log volume start patchy
not ok 6 , LINENUM:17
FAILED COMMAND: gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/bug-1389697.t_cli1.log volume rebalance patchy fix-layout start
not ok 7 , LINENUM:20
FAILED COMMAND: gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/bug-1389697.t_cli1.log volume rebalance patchy status
Connection failed. Please check if gluster daemon is operational.
not ok 8 , LINENUM:25
FAILED COMMAND: [ 1 -eq 0 ]
not ok 9 , LINENUM:28
FAILED COMMAND: gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/bug-1389697.t_cli1.log volume remove-brick patchy 127.1.1.2:/d/backends/2/b3 start
not ok 10 , LINENUM:29
FAILED COMMAND: gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/bug-1389697.t_cli1.log volume remove-brick patchy 127.1.1.2:/d/backends/2/b3 status
ok 11, LINENUM:34
Connection failed. Please check if gluster daemon is operational.
Connection failed. Please check if gluster daemon is operational.
not ok 12 , LINENUM:40
FAILED COMMAND: gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/bug-1389697.t_cli1.log volume remove-brick patchy 127.1.1.2:/d/backends/2/b3 stop
mkdir: cannot create directory ‘/d/backends’: No space left on device
Failed 11/12 subtests 

Test Summary Report
-------------------
./tests/bugs/distribute/bug-1389697.t (Wstat: 0 Tests: 12 Failed: 11)
  Failed tests:  1-10, 12
Files=1, Tests=12, 91 wallclock secs ( 0.02 usr  0.01 sys +  2.37 cusr  1.37 csys =  3.77 CPU)
Result: FAIL
Logs preserved in tarball bug-1389697-iteration-2.tar
End of test ./tests/bugs/distribute/bug-1389697.t
================================================================================
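
The bug-1389697.t failures above all trace back to one condition: mkdir on /d/backends failing with "No space left on device", so launch_cluster never brings up glusterd and every later gluster CLI call fails or cannot reach the daemon. A hedged spot-check one could run on the builder before re-triggering (the paths are the ones that appear in the log; the exact mount layout of /d on the builder is an assumption):

    # Check free space and free inodes on the workspace and log filesystems.
    df -h /d /var/log
    df -i /d /var/log
    # See what is consuming space under the test backends, if the path exists.
    du -sh /d/backends/* 2>/dev/null | sort -h | tail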


================================================================================
Skipping bad test file ./tests/bugs/distribute/bug-1543279.t
Reason: bug(s): 000000
================================================================================


================================================================================
[16:10:04] Running tests in file ./tests/bugs/distribute/bug-853258.t
Logs preserved in tarball bug-853258-iteration-1.tar
./tests/bugs/distribute/bug-853258.t timed out after 200 seconds
./tests/bugs/distribute/bug-853258.t: bad status 124

       *********************************
       *       REGRESSION FAILED       *
       * Retrying failed tests in case *
       * we got some spurious failures *
       *********************************

Logs preserved in tarball bug-853258-iteration-2.tar
./tests/bugs/distribute/bug-853258.t timed out after 200 seconds
End of test ./tests/bugs/distribute/bug-853258.t
================================================================================


================================================================================
[16:16:44] Running tests in file ./tests/bugs/distribute/bug-860663.t
FATAL: command execution failed
java.io.EOFException
	at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2680)
	at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:3155)
	at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:861)
	at java.io.ObjectInputStream.<init>(ObjectInputStream.java:357)
	at hudson.remoting.ObjectInputStreamEx.<init>(ObjectInputStreamEx.java:49)
	at hudson.remoting.Command.readFrom(Command.java:140)
	at hudson.remoting.Command.readFrom(Command.java:126)
	at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:36)
	at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:63)
Caused: java.io.IOException: Unexpected termination of the channel
	at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:77)
Caused: java.io.IOException: Backing channel 'builder204.aws.gluster.org' is disconnected.
	at hudson.remoting.RemoteInvocationHandler.channelOrFail(RemoteInvocationHandler.java:214)
	at hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:283)
	at com.sun.proxy.$Proxy78.isAlive(Unknown Source)
	at hudson.Launcher$RemoteLauncher$ProcImpl.isAlive(Launcher.java:1144)
	at hudson.Launcher$RemoteLauncher$ProcImpl.join(Launcher.java:1136)
	at hudson.tasks.CommandInterpreter.join(CommandInterpreter.java:155)
	at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:109)
	at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:66)
	at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
	at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:744)
	at hudson.model.Build$BuildExecution.build(Build.java:206)
	at hudson.model.Build$BuildExecution.doRun(Build.java:163)
	at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:504)
	at hudson.model.Run.execute(Run.java:1810)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
	at hudson.model.ResourceController.execute(ResourceController.java:97)
	at hudson.model.Executor.run(Executor.java:429)
FATAL: Unable to delete script file /tmp/jenkins3293913506217544639.sh
java.io.EOFException
	at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2680)
	at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:3155)
	at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:861)
	at java.io.ObjectInputStream.<init>(ObjectInputStream.java:357)
	at hudson.remoting.ObjectInputStreamEx.<init>(ObjectInputStreamEx.java:49)
	at hudson.remoting.Command.readFrom(Command.java:140)
	at hudson.remoting.Command.readFrom(Command.java:126)
	at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:36)
	at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:63)
Caused: java.io.IOException: Unexpected termination of the channel
	at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:77)
Caused: hudson.remoting.ChannelClosedException: Channel "unknown": Remote call on builder204.aws.gluster.org failed. The channel is closing down or has closed down
	at hudson.remoting.Channel.call(Channel.java:948)
	at hudson.FilePath.act(FilePath.java:1072)
	at hudson.FilePath.act(FilePath.java:1061)
	at hudson.FilePath.delete(FilePath.java:1565)
	at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:123)
	at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:66)
	at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
	at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:744)
	at hudson.model.Build$BuildExecution.build(Build.java:206)
	at hudson.model.Build$BuildExecution.doRun(Build.java:163)
	at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:504)
	at hudson.model.Run.execute(Run.java:1810)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
	at hudson.model.ResourceController.execute(ResourceController.java:97)
	at hudson.model.Executor.run(Executor.java:429)
Build step 'Execute shell' marked build as failure
ERROR: builder204.aws.gluster.org is offline; cannot locate java-1.6.0-openjdk-1.6.0.0.x86_64
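
The tail of the run is a different failure mode: the Jenkins agent channel to builder204.aws.gluster.org dropped (java.io.EOFException, "Unexpected termination of the channel"), leaving the node offline; the full filesystem earlier in the log is a plausible, but unconfirmed, cause. A hedged way to inspect the node's state from the controller once it answers again (this uses the standard Jenkins computer REST endpoint; the credential variables are placeholders):

    # Query the computer API for the builder's offline status and cause.
    curl -s -u "$JENKINS_USER:$JENKINS_API_TOKEN" \
      "https://build.gluster.org/computer/builder204.aws.gluster.org/api/json?tree=offline,temporarilyOffline,offlineCauseReason"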