[Gluster-Maintainers] Build failed in Jenkins: centos8-s390-regression #58

jenkins at build.gluster.org
Thu Sep 8 08:39:28 UTC 2022


See <https://build.gluster.org/job/centos8-s390-regression/58/display/redirect>

Changes:


------------------------------------------
[...truncated 1.55 MB...]
not ok  29 [    223/    398] <  79> 'gluster --mode=script --wignore volume stop patchy' -> ''
Volume patchy does not exist
not ok  30 [    206/   1061] <  80> 'Stopped volinfo_field patchy Status' -> 'Got "" instead of "Stopped"'
volume delete: patchy: failed: Volume patchy does not exist
not ok  31 [    269/    374] <  82> 'gluster --mode=script --wignore volume delete patchy' -> ''
ok  32 [    160/    401] <  83> '! gluster --mode=script --wignore volume info patchy'
losetup: /d/dev/loop*: failed to use device: No such device
Failed 23/32 subtests 

Test Summary Report
-------------------
./tests/basic/mount.t (Wstat: 0 Tests: 32 Failed: 23)
  Failed tests:  4-18, 22-24, 26, 28-31
Files=1, Tests=32, 70 wallclock secs ( 0.02 usr  0.01 sys +  1.12 cusr  1.46 csys =  2.61 CPU)
Result: FAIL
Logs preserved in tarball mount-iteration-2.tar.gz
End of test ./tests/basic/mount.t
================================================================================
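The "ok"/"not ok" lines above are TAP results emitted by the harness's shell test
helpers; a failed comparison is reported as -> 'Got "..." instead of "..."'. A
minimal, self-contained sketch (an assumption for illustration, not the actual
tests/include.rc helpers) of how such an EXPECT-style check produces those lines:

    #!/bin/bash
    # Sketch of an EXPECT-style TAP helper: run a command, compare its output
    # with the expected string, and emit an ok / not ok line like the ones above.
    n=0
    EXPECT() {
        local want="$1"; shift
        local got
        got=$("$@" 2>/dev/null)
        n=$((n + 1))
        if [ "$got" = "$want" ]; then
            echo "ok  $n '$*'"
        else
            echo "not ok  $n '$*' -> 'Got \"$got\" instead of \"$want\"'"
        fi
    }

    echo "1..1"
    # Hypothetical check, loosely analogous to the failure at < 80> above
    # (the real test uses a volinfo_field helper to read the Status field).
    EXPECT 'Stopped' gluster --mode=script volume info patchy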


======================================== (227 / 832) ========================================
[08:28:59] Running tests in file ./tests/basic/mpx-compat.t
./tests/basic/mpx-compat.t .. 
1..12
losetup: /d/dev/loop*: failed to use device: No such device
ok   1 [   2541/   4446] <  24> 'glusterd'
ok   2 [    187/    223] <  25> 'gluster --mode=script --wignore volume set all cluster.brick-multiplex yes'
ok   3 [    106/   1290] <  28> 'gluster --mode=script --wignore volume create patchy 148.100.84.186:/d/backends/brick-patchy-0 148.100.84.186:/d/backends/brick-patchy-1'
ok   4 [    228/   1322] <  29> 'gluster --mode=script --wignore volume create patchy1 148.100.84.186:/d/backends/brick-patchy1-0 148.100.84.186:/d/backends/brick-patchy1-1'
volume set: success
volume start: patchy: failed: Commit failed on localhost. Please check log file for details.
not ok   5 [   1829/   1706] <  35> 'gluster --mode=script --wignore volume start patchy' -> ''
volume start: patchy1: failed: Commit failed on localhost. Please check log file for details.
not ok   6 [    223/   1727] <  36> 'gluster --mode=script --wignore volume start patchy1' -> ''
not ok   7 [  45227/     74] <  42> '1 count_processes' -> 'Got "0" instead of "1"'
not ok   8 [    185/    746] <  43> '1 count_brick_pids' -> 'Got "0" instead of "1"'
volume stop: patchy1: failed: Volume patchy1 is not in the started state
not ok   9 [    397/    352] <  46> 'gluster --mode=script --wignore volume stop patchy1' -> ''
ok  10 [    158/    286] <  47> 'gluster --mode=script --wignore volume set patchy1 server.manage-gids no'
volume start: patchy1: failed: Commit failed on localhost. Please check log file for details.
not ok  11 [    273/   1742] <  48> 'gluster --mode=script --wignore volume start patchy1' -> ''
not ok  12 [    274/  45323] <  51> '2 count_processes' -> 'Got "0" instead of "2"'
losetup: /d/dev/loop*: failed to use device: No such device
Failed 7/12 subtests 

Test Summary Report
-------------------
./tests/basic/mpx-compat.t (Wstat: 0 Tests: 12 Failed: 7)
  Failed tests:  5-9, 11-12
Files=1, Tests=12, 115 wallclock secs ( 0.02 usr  0.00 sys +  0.94 cusr  1.51 csys =  2.47 CPU)
Result: FAIL
Logs preserved in tarball mpx-compat-iteration-1.tar.gz
./tests/basic/mpx-compat.t: bad status 1

       *********************************
       *       REGRESSION FAILED       *
       * Retrying failed tests in case *
       * we got some spurious failures *
       *********************************

./tests/basic/mpx-compat.t .. 
1..12
losetup: /d/dev/loop*: failed to use device: No such device
ok   1 [   2893/   5661] <  24> 'glusterd'
ok   2 [    325/    335] <  25> 'gluster --mode=script --wignore volume set all cluster.brick-multiplex yes'
ok   3 [    110/    369] <  28> 'gluster --mode=script --wignore volume create patchy 148.100.84.186:/d/backends/brick-patchy-0 148.100.84.186:/d/backends/brick-patchy-1'
ok   4 [    599/    704] <  29> 'gluster --mode=script --wignore volume create patchy1 148.100.84.186:/d/backends/brick-patchy1-0 148.100.84.186:/d/backends/brick-patchy1-1'
volume set: success
volume start: patchy: failed: Commit failed on localhost. Please check log file for details.
not ok   5 [   1671/    566] <  35> 'gluster --mode=script --wignore volume start patchy' -> ''
volume start: patchy1: failed: Commit failed on localhost. Please check log file for details.
not ok   6 [    122/   1642] <  36> 'gluster --mode=script --wignore volume start patchy1' -> ''
not ok   7 [  45428/    229] <  42> '1 count_processes' -> 'Got "0" instead of "1"'
not ok   8 [    193/   3847] <  43> '1 count_brick_pids' -> 'Got "0" instead of "1"'
volume stop: patchy1: failed: Volume patchy1 is not in the started state
not ok   9 [    475/    368] <  46> 'gluster --mode=script --wignore volume stop patchy1' -> ''
ok  10 [    724/   2307] <  47> 'gluster --mode=script --wignore volume set patchy1 server.manage-gids no'
volume start: patchy1: failed: Commit failed on localhost. Please check log file for details.
not ok  11 [    139/    381] <  48> 'gluster --mode=script --wignore volume start patchy1' -> ''
not ok  12 [    117/  45358] <  51> '2 count_processes' -> 'Got "0" instead of "2"'
losetup: /d/dev/loop*: failed to use device: No such device
Failed 7/12 subtests 

Test Summary Report
-------------------
./tests/basic/mpx-compat.t (Wstat: 0 Tests: 12 Failed: 7)
  Failed tests:  5-9, 11-12
Files=1, Tests=12, 119 wallclock secs ( 0.01 usr  0.00 sys +  0.94 cusr  1.54 csys =  2.49 CPU)
Result: FAIL
Logs preserved in tarball mpx-compat-iteration-2.tar.gz
End of test ./tests/basic/mpx-compat.t
================================================================================
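As the banner above indicates, the harness re-runs a failed .t once before
treating the failure as real; mpx-compat.t failed on both iterations here. A
rough sketch of that retry behaviour (an assumption for illustration, not the
actual regression driver):

    #!/bin/bash
    # Sketch: run a TAP test once and retry it a single time on failure,
    # to rule out spurious failures before declaring a hard failure.
    run_one() {
        prove -vf "$1"
    }

    t="./tests/basic/mpx-compat.t"
    if ! run_one "$t"; then
        echo "REGRESSION FAILED - retrying $t in case we got a spurious failure"
        run_one "$t" || { echo "$t: bad status $?"; exit 1; }
    fi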


======================================== (228 / 832) ========================================
[08:32:55] Running tests in file ./tests/basic/multiple-volume-shd-mux.t
Logs preserved in tarball multiple-volume-shd-mux-iteration-1.tar.gz
./tests/basic/multiple-volume-shd-mux.t timed out after 200 seconds
./tests/basic/multiple-volume-shd-mux.t: bad status 124
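"bad status 124" is the exit code GNU coreutils timeout(1) returns when it has
to kill a command for exceeding its time limit, which matches the 200-second cap
reported above. A sketch of how such a cap can be applied (assumed wrapper, not
the actual driver):

    #!/bin/bash
    # Run the test under a 200-second ceiling; timeout(1) exits with 124 if it
    # had to kill the test for overrunning.
    timeout 200 prove -vf ./tests/basic/multiple-volume-shd-mux.t
    status=$?
    if [ "$status" -eq 124 ]; then
        echo "./tests/basic/multiple-volume-shd-mux.t: bad status $status"
    fi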

       *********************************
       *       REGRESSION FAILED       *
       * Retrying failed tests in case *
       * we got some spurious failures *
       *********************************

FATAL: command execution failed
java.io.IOException
	at hudson.remoting.Channel.close(Channel.java:1491)
	at hudson.remoting.Channel.close(Channel.java:1447)
	at hudson.slaves.SlaveComputer.closeChannel(SlaveComputer.java:923)
	at hudson.slaves.SlaveComputer.kill(SlaveComputer.java:889)
	at hudson.model.AbstractCIBase.killComputer(AbstractCIBase.java:95)
	at jenkins.model.Jenkins.lambda$_cleanUpDisconnectComputers$11(Jenkins.java:3705)
	at hudson.model.Queue._withLock(Queue.java:1395)
	at hudson.model.Queue.withLock(Queue.java:1269)
	at jenkins.model.Jenkins._cleanUpDisconnectComputers(Jenkins.java:3701)
	at jenkins.model.Jenkins.cleanUp(Jenkins.java:3582)
	at hudson.WebAppMain.contextDestroyed(WebAppMain.java:374)
	at org.eclipse.jetty.server.handler.ContextHandler.callContextDestroyed(ContextHandler.java:1080)
	at org.eclipse.jetty.servlet.ServletContextHandler.callContextDestroyed(ServletContextHandler.java:584)
	at org.eclipse.jetty.server.handler.ContextHandler.contextDestroyed(ContextHandler.java:1043)
	at org.eclipse.jetty.servlet.ServletHandler.doStop(ServletHandler.java:319)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:94)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.stop(ContainerLifeCycle.java:180)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.doStop(ContainerLifeCycle.java:201)
	at org.eclipse.jetty.server.handler.AbstractHandler.doStop(AbstractHandler.java:108)
	at org.eclipse.jetty.security.SecurityHandler.doStop(SecurityHandler.java:430)
	at org.eclipse.jetty.security.ConstraintSecurityHandler.doStop(ConstraintSecurityHandler.java:423)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:94)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.stop(ContainerLifeCycle.java:180)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.doStop(ContainerLifeCycle.java:201)
	at org.eclipse.jetty.server.handler.AbstractHandler.doStop(AbstractHandler.java:108)
	at org.eclipse.jetty.server.session.SessionHandler.doStop(SessionHandler.java:520)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:94)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.stop(ContainerLifeCycle.java:180)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.doStop(ContainerLifeCycle.java:201)
	at org.eclipse.jetty.server.handler.AbstractHandler.doStop(AbstractHandler.java:108)
	at org.eclipse.jetty.server.handler.ContextHandler.stopContext(ContextHandler.java:1066)
	at org.eclipse.jetty.servlet.ServletContextHandler.stopContext(ServletContextHandler.java:386)
	at org.eclipse.jetty.webapp.WebAppContext.stopWebapp(WebAppContext.java:1454)
	at org.eclipse.jetty.webapp.WebAppContext.stopContext(WebAppContext.java:1420)
	at org.eclipse.jetty.server.handler.ContextHandler.doStop(ContextHandler.java:1120)
	at org.eclipse.jetty.servlet.ServletContextHandler.doStop(ServletContextHandler.java:297)
	at org.eclipse.jetty.webapp.WebAppContext.doStop(WebAppContext.java:547)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:94)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.stop(ContainerLifeCycle.java:180)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.doStop(ContainerLifeCycle.java:201)
	at org.eclipse.jetty.server.handler.AbstractHandler.doStop(AbstractHandler.java:108)
	at org.eclipse.jetty.server.Server.doStop(Server.java:470)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:94)
	at winstone.Launcher.shutdown(Launcher.java:354)
	at winstone.ShutdownHook.run(ShutdownHook.java:26)
Caused: hudson.remoting.ChannelClosedException: Channel "hudson.remoting.Channel@1a70de13:builder-el8-s390x-1.ibm-l1.gluster.org": Remote call on builder-el8-s390x-1.ibm-l1.gluster.org failed. The channel is closing down or has closed down
	at hudson.remoting.Channel.call(Channel.java:993)
	at hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:285)
	at com.sun.proxy.$Proxy87.isAlive(Unknown Source)
	at hudson.Launcher$RemoteLauncher$ProcImpl.isAlive(Launcher.java:1215)
	at hudson.Launcher$RemoteLauncher$ProcImpl.join(Launcher.java:1207)
	at hudson.tasks.CommandInterpreter.join(CommandInterpreter.java:195)
	at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:145)
	at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:92)
	at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
	at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:816)
	at hudson.model.Build$BuildExecution.build(Build.java:199)
	at hudson.model.Build$BuildExecution.doRun(Build.java:164)
	at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:524)
	at hudson.model.Run.execute(Run.java:1897)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:44)
	at hudson.model.ResourceController.execute(ResourceController.java:101)
	at hudson.model.Executor.run(Executor.java:442)
FATAL: Unable to delete script file /tmp/jenkins4282702585295610611.sh
java.io.IOException
	at hudson.remoting.Channel.close(Channel.java:1491)
	at hudson.remoting.Channel.close(Channel.java:1447)
	at hudson.slaves.SlaveComputer.closeChannel(SlaveComputer.java:923)
	at hudson.slaves.SlaveComputer.kill(SlaveComputer.java:889)
	at hudson.model.AbstractCIBase.killComputer(AbstractCIBase.java:95)
	at jenkins.model.Jenkins.lambda$_cleanUpDisconnectComputers$11(Jenkins.java:3705)
	at hudson.model.Queue._withLock(Queue.java:1395)
	at hudson.model.Queue.withLock(Queue.java:1269)
	at jenkins.model.Jenkins._cleanUpDisconnectComputers(Jenkins.java:3701)
	at jenkins.model.Jenkins.cleanUp(Jenkins.java:3582)
	at hudson.WebAppMain.contextDestroyed(WebAppMain.java:374)
	at org.eclipse.jetty.server.handler.ContextHandler.callContextDestroyed(ContextHandler.java:1080)
	at org.eclipse.jetty.servlet.ServletContextHandler.callContextDestroyed(ServletContextHandler.java:584)
	at org.eclipse.jetty.server.handler.ContextHandler.contextDestroyed(ContextHandler.java:1043)
	at org.eclipse.jetty.servlet.ServletHandler.doStop(ServletHandler.java:319)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:94)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.stop(ContainerLifeCycle.java:180)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.doStop(ContainerLifeCycle.java:201)
	at org.eclipse.jetty.server.handler.AbstractHandler.doStop(AbstractHandler.java:108)
	at org.eclipse.jetty.security.SecurityHandler.doStop(SecurityHandler.java:430)
	at org.eclipse.jetty.security.ConstraintSecurityHandler.doStop(ConstraintSecurityHandler.java:423)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:94)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.stop(ContainerLifeCycle.java:180)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.doStop(ContainerLifeCycle.java:201)
	at org.eclipse.jetty.server.handler.AbstractHandler.doStop(AbstractHandler.java:108)
	at org.eclipse.jetty.server.session.SessionHandler.doStop(SessionHandler.java:520)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:94)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.stop(ContainerLifeCycle.java:180)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.doStop(ContainerLifeCycle.java:201)
	at org.eclipse.jetty.server.handler.AbstractHandler.doStop(AbstractHandler.java:108)
	at org.eclipse.jetty.server.handler.ContextHandler.stopContext(ContextHandler.java:1066)
	at org.eclipse.jetty.servlet.ServletContextHandler.stopContext(ServletContextHandler.java:386)
	at org.eclipse.jetty.webapp.WebAppContext.stopWebapp(WebAppContext.java:1454)
	at org.eclipse.jetty.webapp.WebAppContext.stopContext(WebAppContext.java:1420)
	at org.eclipse.jetty.server.handler.ContextHandler.doStop(ContextHandler.java:1120)
	at org.eclipse.jetty.servlet.ServletContextHandler.doStop(ServletContextHandler.java:297)
	at org.eclipse.jetty.webapp.WebAppContext.doStop(WebAppContext.java:547)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:94)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.stop(ContainerLifeCycle.java:180)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.doStop(ContainerLifeCycle.java:201)
	at org.eclipse.jetty.server.handler.AbstractHandler.doStop(AbstractHandler.java:108)
	at org.eclipse.jetty.server.Server.doStop(Server.java:470)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:94)
	at winstone.Launcher.shutdown(Launcher.java:354)
	at winstone.ShutdownHook.run(ShutdownHook.java:26)
Caused: hudson.remoting.ChannelClosedException: Channel "hudson.remoting.Channel@1a70de13:builder-el8-s390x-1.ibm-l1.gluster.org": Remote call on builder-el8-s390x-1.ibm-l1.gluster.org failed. The channel is closing down or has closed down
	at hudson.remoting.Channel.call(Channel.java:993)
	at hudson.FilePath.act(FilePath.java:1194)
	at hudson.FilePath.act(FilePath.java:1183)
	at hudson.FilePath.delete(FilePath.java:1730)
	at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:163)
	at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:92)
	at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
	at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:816)
	at hudson.model.Build$BuildExecution.build(Build.java:199)
	at hudson.model.Build$BuildExecution.doRun(Build.java:164)
	at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:524)
	at hudson.model.Run.execute(Run.java:1897)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:44)
	at hudson.model.ResourceController.execute(ResourceController.java:101)
	at hudson.model.Executor.run(Executor.java:442)
Build step 'Execute shell' marked build as failure
ERROR: Unable to tear down: null
java.lang.NullPointerException
	at org.jenkinsci.plugins.credentialsbinding.impl.UnbindableDir.tempDir(UnbindableDir.java:67)
	at org.jenkinsci.plugins.credentialsbinding.impl.UnbindableDir.secretsDir(UnbindableDir.java:62)
	at org.jenkinsci.plugins.credentialsbinding.impl.UnbindableDir.access$000(UnbindableDir.java:23)
	at org.jenkinsci.plugins.credentialsbinding.impl.UnbindableDir$UnbinderImpl.unbind(UnbindableDir.java:84)
	at org.jenkinsci.plugins.credentialsbinding.impl.SecretBuildWrapper$1.tearDown(SecretBuildWrapper.java:111)
	at hudson.model.AbstractBuild$AbstractBuildExecution.tearDownBuildEnvironments(AbstractBuild.java:564)
	at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:528)
	at hudson.model.Run.execute(Run.java:1897)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:44)
	at hudson.model.ResourceController.execute(ResourceController.java:101)
	at hudson.model.Executor.run(Executor.java:442)
ERROR: builder-el8-s390x-1.ibm-l1.gluster.org is offline; cannot locate java-1.6.0-openjdk-1.6.0.0.x86_64

