[Gluster-Maintainers] Build failed in Jenkins: experimental-periodic #624

jenkins at build.gluster.org
Mon May 13 16:44:18 UTC 2019


See <https://build.gluster.org/job/experimental-periodic/624/display/redirect>

------------------------------------------
[...truncated 575.69 KB...]
./tests/bugs/distribute/bug-1368012.t timed out after 200 seconds
End of test ./tests/bugs/distribute/bug-1368012.t
================================================================================


================================================================================
[16:08:52] Running tests in file ./tests/bugs/distribute/bug-1389697.t
./tests/bugs/distribute/bug-1389697.t .. 
mkdir: cannot create directory ‘/d/backends’: No space left on device
1..12
mkdir: cannot create directory ‘/d/backends’: No space left on device
mkdir: cannot create directory ‘/d/backends’: No space left on device
mkdir: cannot create directory ‘/d/backends’: No space left on device
mkdir: cannot create directory ‘/d/backends’: No space left on device
mkdir: cannot create directory ‘/d/backends’: No space left on device
not ok 1 , LINENUM:9
FAILED COMMAND: launch_cluster 2
not ok 2 , LINENUM:10
FAILED COMMAND: gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/bug-1389697.t_cli1.log peer probe 127.1.1.2
not ok 3 Got "0" instead of "1", LINENUM:11
FAILED COMMAND: 1 peer_count
not ok 4 , LINENUM:13
FAILED COMMAND: gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/bug-1389697.t_cli1.log volume create patchy 127.1.1.1:/d/backends/1/b1 127.1.1.1:/d/backends/1/b2 127.1.1.2:/d/backends/2/b3
not ok 5 , LINENUM:14
FAILED COMMAND: gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/bug-1389697.t_cli1.log volume start patchy
not ok 6 , LINENUM:17
FAILED COMMAND: gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/bug-1389697.t_cli1.log volume rebalance patchy fix-layout start
not ok 7 , LINENUM:20
FAILED COMMAND: gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/bug-1389697.t_cli1.log volume rebalance patchy status
Connection failed. Please check if gluster daemon is operational.
not ok 8 , LINENUM:25
FAILED COMMAND: [ 1 -eq 0 ]
not ok 9 , LINENUM:28
FAILED COMMAND: gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/bug-1389697.t_cli1.log volume remove-brick patchy 127.1.1.2:/d/backends/2/b3 start
not ok 10 , LINENUM:29
FAILED COMMAND: gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/bug-1389697.t_cli1.log volume remove-brick patchy 127.1.1.2:/d/backends/2/b3 status
ok 11, LINENUM:34
Connection failed. Please check if gluster daemon is operational.
Connection failed. Please check if gluster daemon is operational.
not ok 12 , LINENUM:40
FAILED COMMAND: gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/bug-1389697.t_cli1.log volume remove-brick patchy 127.1.1.2:/d/backends/2/b3 stop
mkdir: cannot create directory ‘/d/backends’: No space left on device
Failed 11/12 subtests 

Test Summary Report
-------------------
./tests/bugs/distribute/bug-1389697.t (Wstat: 0 Tests: 12 Failed: 11)
  Failed tests:  1-10, 12
Files=1, Tests=12, 93 wallclock secs ( 0.02 usr  0.01 sys +  2.12 cusr  1.32 csys =  3.47 CPU)
Result: FAIL
Logs preserved in tarball bug-1389697-iteration-1.tar
./tests/bugs/distribute/bug-1389697.t: bad status 1
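Every `mkdir: cannot create directory '/d/backends'` failure above is ENOSPC on the builder's backing filesystem, so all twelve subtests were doomed before `launch_cluster` even ran. A minimal pre-flight guard could fail the run fast with a clear message instead; this is only a sketch, and `require_free_space` is a hypothetical helper, not part of the regression harness:

```shell
# Sketch: refuse to start a test run when the backing filesystem is
# (nearly) full, instead of letting every mkdir fail with ENOSPC.
# "require_free_space" is a hypothetical helper, not harness code.
require_free_space() {
    dir=$1; need_kb=$2
    # df -P prints exactly one data line; column 4 is available 1K-blocks.
    avail_kb=$(df -P "$dir" | awk 'NR==2 {print $4}')
    if [ "$avail_kb" -lt "$need_kb" ]; then
        echo "ENOSPC guard: only ${avail_kb}KB free on $dir (need ${need_kb}KB)"
        return 1
    fi
}

# Example: demand at least 1KB on / (virtually always satisfied).
require_free_space / 1 && echo "space check passed"
```

Run before each `.t` file, such a check would turn eleven confusing subtest failures into one actionable "disk full on builder" error.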

       *********************************
       *       REGRESSION FAILED       *
       * Retrying failed tests in case *
       * we got some spurious failures *
       *********************************

./tests/bugs/distribute/bug-1389697.t .. 
mkdir: cannot create directory ‘/d/backends’: No space left on device
1..12
mkdir: cannot create directory ‘/d/backends’: No space left on device
mkdir: cannot create directory ‘/d/backends’: No space left on device
mkdir: cannot create directory ‘/d/backends’: No space left on device
mkdir: cannot create directory ‘/d/backends’: No space left on device
mkdir: cannot create directory ‘/d/backends’: No space left on device
not ok 1 , LINENUM:9
FAILED COMMAND: launch_cluster 2
not ok 2 , LINENUM:10
FAILED COMMAND: gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/bug-1389697.t_cli1.log peer probe 127.1.1.2
not ok 3 Got "0" instead of "1", LINENUM:11
FAILED COMMAND: 1 peer_count
not ok 4 , LINENUM:13
FAILED COMMAND: gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/bug-1389697.t_cli1.log volume create patchy 127.1.1.1:/d/backends/1/b1 127.1.1.1:/d/backends/1/b2 127.1.1.2:/d/backends/2/b3
not ok 5 , LINENUM:14
FAILED COMMAND: gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/bug-1389697.t_cli1.log volume start patchy
not ok 6 , LINENUM:17
FAILED COMMAND: gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/bug-1389697.t_cli1.log volume rebalance patchy fix-layout start
not ok 7 , LINENUM:20
FAILED COMMAND: gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/bug-1389697.t_cli1.log volume rebalance patchy status
Connection failed. Please check if gluster daemon is operational.
not ok 8 , LINENUM:25
FAILED COMMAND: [ 1 -eq 0 ]
not ok 9 , LINENUM:28
FAILED COMMAND: gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/bug-1389697.t_cli1.log volume remove-brick patchy 127.1.1.2:/d/backends/2/b3 start
not ok 10 , LINENUM:29
FAILED COMMAND: gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/bug-1389697.t_cli1.log volume remove-brick patchy 127.1.1.2:/d/backends/2/b3 status
ok 11, LINENUM:34
Connection failed. Please check if gluster daemon is operational.
Connection failed. Please check if gluster daemon is operational.
not ok 12 , LINENUM:40
FAILED COMMAND: gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/bug-1389697.t_cli1.log volume remove-brick patchy 127.1.1.2:/d/backends/2/b3 stop
mkdir: cannot create directory ‘/d/backends’: No space left on device
Failed 11/12 subtests 

Test Summary Report
-------------------
./tests/bugs/distribute/bug-1389697.t (Wstat: 0 Tests: 12 Failed: 11)
  Failed tests:  1-10, 12
Files=1, Tests=12, 91 wallclock secs ( 0.03 usr  0.00 sys +  2.37 cusr  1.39 csys =  3.79 CPU)
Result: FAIL
Logs preserved in tarball bug-1389697-iteration-2.tar
End of test ./tests/bugs/distribute/bug-1389697.t
================================================================================
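The `1..12` plan and the `ok` / `not ok` lines above are standard TAP (Test Anything Protocol) output, which the harness's summary (`Tests: 12 Failed: 11`) is derived from. A quick sketch of tallying failures from such a log (an illustrative one-liner, not the harness's own reporting code):

```shell
# Count TAP failures: every line beginning "not ok" is one failed subtest.
tap_log='1..3
ok 1
not ok 2
not ok 3'
failed=$(printf '%s\n' "$tap_log" | grep -c '^not ok')
echo "failed=$failed"   # prints: failed=2
```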


================================================================================
Skipping bad test file ./tests/bugs/distribute/bug-1543279.t
Reason: bug(s): 000000
================================================================================


================================================================================
[16:11:56] Running tests in file ./tests/bugs/distribute/bug-853258.t
Logs preserved in tarball bug-853258-iteration-1.tar
./tests/bugs/distribute/bug-853258.t timed out after 200 seconds
./tests/bugs/distribute/bug-853258.t: bad status 124

       *********************************
       *       REGRESSION FAILED       *
       * Retrying failed tests in case *
       * we got some spurious failures *
       *********************************

Logs preserved in tarball bug-853258-iteration-2.tar
./tests/bugs/distribute/bug-853258.t timed out after 200 seconds
End of test ./tests/bugs/distribute/bug-853258.t
================================================================================
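The `bad status 124` above is the exit code coreutils `timeout(1)` uses when it kills a command that exceeded its time limit, consistent with the 200-second cap the harness applies per test file. A minimal demonstration:

```shell
# timeout(1) terminates the command after the given duration and exits
# with status 124 -- the value the harness reports as "bad status 124".
timeout 1 sleep 5
status=$?
echo "exit status: $status"   # prints: exit status: 124
```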


================================================================================
[16:18:36] Running tests in file ./tests/bugs/distribute/bug-860663.t
Logs preserved in tarball bug-860663-iteration-1.tar
./tests/bugs/distribute/bug-860663.t timed out after 200 seconds
./tests/bugs/distribute/bug-860663.t: bad status 124

       *********************************
       *       REGRESSION FAILED       *
       * Retrying failed tests in case *
       * we got some spurious failures *
       *********************************

Logs preserved in tarball bug-860663-iteration-2.tar
./tests/bugs/distribute/bug-860663.t timed out after 200 seconds
End of test ./tests/bugs/distribute/bug-860663.t
================================================================================


================================================================================
[16:25:16] Running tests in file ./tests/bugs/distribute/bug-862967.t
Logs preserved in tarball bug-862967-iteration-1.tar
./tests/bugs/distribute/bug-862967.t timed out after 200 seconds
./tests/bugs/distribute/bug-862967.t: bad status 124

       *********************************
       *       REGRESSION FAILED       *
       * Retrying failed tests in case *
       * we got some spurious failures *
       *********************************

Logs preserved in tarball bug-862967-iteration-2.tar
./tests/bugs/distribute/bug-862967.t timed out after 200 seconds
End of test ./tests/bugs/distribute/bug-862967.t
================================================================================


================================================================================
[16:31:56] Running tests in file ./tests/bugs/distribute/bug-882278.t
Logs preserved in tarball bug-882278-iteration-1.tar
./tests/bugs/distribute/bug-882278.t timed out after 200 seconds
./tests/bugs/distribute/bug-882278.t: bad status 124

       *********************************
       *       REGRESSION FAILED       *
       * Retrying failed tests in case *
       * we got some spurious failures *
       *********************************

Logs preserved in tarball bug-882278-iteration-2.tar
./tests/bugs/distribute/bug-882278.t timed out after 200 seconds
End of test ./tests/bugs/distribute/bug-882278.t
================================================================================


================================================================================
[16:38:38] Running tests in file ./tests/bugs/distribute/bug-884455.t
FATAL: command execution failed
java.io.EOFException
	at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2680)
	at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:3155)
	at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:861)
	at java.io.ObjectInputStream.<init>(ObjectInputStream.java:357)
	at hudson.remoting.ObjectInputStreamEx.<init>(ObjectInputStreamEx.java:49)
	at hudson.remoting.Command.readFrom(Command.java:140)
	at hudson.remoting.Command.readFrom(Command.java:126)
	at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:36)
	at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:63)
Caused: java.io.IOException: Unexpected termination of the channel
	at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:77)
Caused: java.io.IOException: Backing channel 'builder201.aws.gluster.org' is disconnected.
	at hudson.remoting.RemoteInvocationHandler.channelOrFail(RemoteInvocationHandler.java:214)
	at hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:283)
	at com.sun.proxy.$Proxy80.isAlive(Unknown Source)
	at hudson.Launcher$RemoteLauncher$ProcImpl.isAlive(Launcher.java:1144)
	at hudson.Launcher$RemoteLauncher$ProcImpl.join(Launcher.java:1136)
	at hudson.tasks.CommandInterpreter.join(CommandInterpreter.java:155)
	at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:109)
	at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:66)
	at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
	at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:744)
	at hudson.model.Build$BuildExecution.build(Build.java:206)
	at hudson.model.Build$BuildExecution.doRun(Build.java:163)
	at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:504)
	at hudson.model.Run.execute(Run.java:1816)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
	at hudson.model.ResourceController.execute(ResourceController.java:97)
	at hudson.model.Executor.run(Executor.java:429)
FATAL: Unable to delete script file /tmp/jenkins698501376568896035.sh
java.io.EOFException
	at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2680)
	at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:3155)
	at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:861)
	at java.io.ObjectInputStream.<init>(ObjectInputStream.java:357)
	at hudson.remoting.ObjectInputStreamEx.<init>(ObjectInputStreamEx.java:49)
	at hudson.remoting.Command.readFrom(Command.java:140)
	at hudson.remoting.Command.readFrom(Command.java:126)
	at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:36)
	at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:63)
Caused: java.io.IOException: Unexpected termination of the channel
	at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:77)
Caused: hudson.remoting.ChannelClosedException: Channel "unknown": Remote call on builder201.aws.gluster.org failed. The channel is closing down or has closed down
	at hudson.remoting.Channel.call(Channel.java:950)
	at hudson.FilePath.act(FilePath.java:1069)
	at hudson.FilePath.act(FilePath.java:1058)
	at hudson.FilePath.delete(FilePath.java:1539)
	at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:123)
	at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:66)
	at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
	at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:744)
	at hudson.model.Build$BuildExecution.build(Build.java:206)
	at hudson.model.Build$BuildExecution.doRun(Build.java:163)
	at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:504)
	at hudson.model.Run.execute(Run.java:1816)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
	at hudson.model.ResourceController.execute(ResourceController.java:97)
	at hudson.model.Executor.run(Executor.java:429)
Build step 'Execute shell' marked build as failure
ERROR: builder201.aws.gluster.org is offline; cannot locate java-1.6.0-openjdk-1.6.0.0.x86_64

