[Gluster-Maintainers] Build failed in Jenkins: centos8-s390-regression #59

jenkins at build.gluster.org
Fri Sep 9 06:56:56 UTC 2022


See <https://build.gluster.org/job/centos8-s390-regression/59/display/redirect>

Changes:


------------------------------------------
[...truncated 1.28 MB...]
======================================== (175 / 832) ========================================
[06:50:56] Running tests in file ./tests/basic/fuse/active-io-graph-switch.t
./tests/basic/fuse/active-io-graph-switch.t .. 
1..32
losetup: /d/dev/loop*: failed to use device: No such device
rm: cannot remove '/mnt/glusterfs/0': Directory not empty
Aborting.

/d/dev could not be deleted, here are the left over items
drwxr-xr-x. 2 root root 4096 Sep  9 06:50 /mnt/glusterfs/0
ls: cannot access '/mnt/glusterfs/0/file58590085.data': No such file or directory
ls: cannot access '/mnt/glusterfs/0/file109459841.data': No such file or directory
ls: cannot access '/mnt/glusterfs/0/file11178714.data': No such file or directory
ls: cannot access '/mnt/glusterfs/0/file11172369.data': No such file or directory
ls: cannot access '/mnt/glusterfs/0/file37231648.data': No such file or directory
ls: cannot access '/mnt/glusterfs/0/file37231811.data': No such file or directory
ls: cannot access '/mnt/glusterfs/0/file44041027.data': No such file or directory
ls: cannot access '/mnt/glusterfs/0/file18651060.data': No such file or directory
ls: cannot access '/mnt/glusterfs/0/file44024681.data': No such file or directory

Please correct the problem and try again.
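
What the cleanup is tripping over here is ordering: 'rm' fails on /mnt/glusterfs/0 with "Directory not empty" because a dead FUSE mount is still attached to it (the unreachable file*.data entries above are the classic symptom), and the same stale mount is what later makes subtest 19's force_umount report N. A minimal cleanup sketch, assuming the mount point from the log; the snippet is illustrative, not the harness's actual code:

    #!/bin/bash
    # Illustrative cleanup of a stale FUSE mount point; path from the log.
    MP=/mnt/glusterfs/0

    # 'rm -rf' on a directory with a half-dead FUSE mount attached fails
    # with "Directory not empty"/EBUSY, so detach the mount first.
    if mountpoint -q "$MP"; then
        umount "$MP" 2>/dev/null || umount -l "$MP"   # lazy detach fallback
    fi

    rm -rf "${MP:?}"/*   # now the leftover file*.data entries can go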

ok   1 [   2781/   4039] <  33> 'glusterd'
ok   2 [    148/     64] <  34> 'pidof glusterd'
volume create: patchy: failed: Failed to create brick directory for brick 148.100.84.186:/d/backends/patchy0. Reason : No such file or directory 
not ok   3 [    107/    247] <  35> 'gluster --mode=script --wignore volume create patchy replica 3 148.100.84.186:/d/backends/patchy0 148.100.84.186:/d/backends/patchy1 148.100.84.186:/d/backends/patchy2' -> ''
volume set: failed: Volume patchy does not exist
not ok   4 [    108/    217] <  36> 'gluster --mode=script --wignore volume set patchy flush-behind off' -> ''
volume start: patchy: failed: Volume patchy does not exist
not ok   5 [    109/    596] <  37> 'gluster --mode=script --wignore volume start patchy' -> ''
not ok   6 [    352/   1325] <  38> '_GFS --attribute-timeout=0 --entry-timeout=0 --volfile-id=/patchy --volfile-server=148.100.84.186 /mnt/glusterfs/0' -> ''
ok   7 [    251/     93] <  39> 'touch /mnt/glusterfs/0/lock'
not ok   8 [    321/   5814] <  41> '101 count_files' -> 'Got "109" instead of "101"'
volume set: failed: Volume patchy does not exist
not ok   9 [    207/    277] <  21> 'gluster --mode=script --wignore volume set patchy performance.stat-prefetch off' -> ''
volume set: failed: Volume patchy does not exist
not ok  10 [   3528/    416] <  23> 'gluster --mode=script --wignore volume set patchy performance.stat-prefetch on' -> ''
volume set: failed: Volume patchy does not exist
not ok  11 [   3267/    779] <  21> 'gluster --mode=script --wignore volume set patchy performance.stat-prefetch off' -> ''
volume set: failed: Volume patchy does not exist
not ok  12 [   3470/   1364] <  23> 'gluster --mode=script --wignore volume set patchy performance.stat-prefetch on' -> ''
volume set: failed: Volume patchy does not exist
not ok  13 [   3457/    657] <  21> 'gluster --mode=script --wignore volume set patchy performance.stat-prefetch off' -> ''
volume set: failed: Volume patchy does not exist
not ok  14 [   3562/    484] <  23> 'gluster --mode=script --wignore volume set patchy performance.stat-prefetch on' -> ''
ok  15 [   3349/    110] <  44> 'rm -f /mnt/glusterfs/0/lock'
not ok  16 [   1526/    299] <  46> '100 count_files' -> 'Got "109" instead of "100"'
ok  17 [    248/    106] <  47> 'rm -f /mnt/glusterfs/0/1 /mnt/glusterfs/0/2 /mnt/glusterfs/0/3 /mnt/glusterfs/0/4 /mnt/glusterfs/0/5 /mnt/glusterfs/0/6 /mnt/glusterfs/0/7 /mnt/glusterfs/0/8 /mnt/glusterfs/0/9 /mnt/glusterfs/0/10 /mnt/glusterfs/0/11 /mnt/glusterfs/0/12 /mnt/glusterfs/0/13 /mnt/glusterfs/0/14 /mnt/glusterfs/0/15 /mnt/glusterfs/0/16 /mnt/glusterfs/0/17 /mnt/glusterfs/0/18 /mnt/glusterfs/0/19 /mnt/glusterfs/0/20 /mnt/glusterfs/0/21 /mnt/glusterfs/0/22 /mnt/glusterfs/0/23 /mnt/glusterfs/0/24 /mnt/glusterfs/0/25 /mnt/glusterfs/0/26 /mnt/glusterfs/0/27 /mnt/glusterfs/0/28 /mnt/glusterfs/0/29 /mnt/glusterfs/0/30 /mnt/glusterfs/0/31 /mnt/glusterfs/0/32 /mnt/glusterfs/0/33 /mnt/glusterfs/0/34 /mnt/glusterfs/0/35 /mnt/glusterfs/0/36 /mnt/glusterfs/0/37 /mnt/glusterfs/0/38 /mnt/glusterfs/0/39 /mnt/glusterfs/0/40 /mnt/glusterfs/0/41 /mnt/glusterfs/0/42 /mnt/glusterfs/0/43 /mnt/glusterfs/0/44 /mnt/glusterfs/0/45 /mnt/glusterfs/0/46 /mnt/glusterfs/0/47 /mnt/glusterfs/0/48 /mnt/glusterfs/0/49 /mnt/glusterfs/0/50 /mnt/glusterfs/0/51 /mnt/glusterfs/0/52 /mnt/glusterfs/0/53 /mnt/glusterfs/0/54 /mnt/glusterfs/0/55 /mnt/glusterfs/0/56 /mnt/glusterfs/0/57 /mnt/glusterfs/0/58 /mnt/glusterfs/0/59 /mnt/glusterfs/0/60 /mnt/glusterfs/0/61 /mnt/glusterfs/0/62 /mnt/glusterfs/0/63 /mnt/glusterfs/0/64 /mnt/glusterfs/0/65 /mnt/glusterfs/0/66 /mnt/glusterfs/0/67 /mnt/glusterfs/0/68 /mnt/glusterfs/0/69 /mnt/glusterfs/0/70 /mnt/glusterfs/0/71 /mnt/glusterfs/0/72 /mnt/glusterfs/0/73 /mnt/glusterfs/0/74 /mnt/glusterfs/0/75 /mnt/glusterfs/0/76 /mnt/glusterfs/0/77 /mnt/glusterfs/0/78 /mnt/glusterfs/0/79 /mnt/glusterfs/0/80 /mnt/glusterfs/0/81 /mnt/glusterfs/0/82 /mnt/glusterfs/0/83 /mnt/glusterfs/0/84 /mnt/glusterfs/0/85 /mnt/glusterfs/0/86 /mnt/glusterfs/0/87 /mnt/glusterfs/0/88 /mnt/glusterfs/0/89 /mnt/glusterfs/0/90 /mnt/glusterfs/0/91 /mnt/glusterfs/0/92 /mnt/glusterfs/0/93 /mnt/glusterfs/0/94 /mnt/glusterfs/0/95 /mnt/glusterfs/0/96 /mnt/glusterfs/0/97 /mnt/glusterfs/0/98 /mnt/glusterfs/0/99 /mnt/glusterfs/0/100'
not ok  18 [    158/     85] <  48> '0 count_files' -> 'Got "9" instead of "0"'
umount: /mnt/glusterfs/0: not mounted.   (message repeated 11 times)
not ok  19 [    104/   5066] <  50> 'Y force_umount /mnt/glusterfs/0' -> 'Got "N" instead of "Y"'
not ok  20 [    180/   1184] <  53> '_GFS --attribute-timeout=0 --entry-timeout=0 --reader-thread-count=10 --volfile-id=/patchy --volfile-server=148.100.84.186 /mnt/glusterfs/0' -> ''
ok  21 [    141/     52] <  54> 'touch /mnt/glusterfs/0/lock'
not ok  22 [    452/   5212] <  56> '101 count_files' -> 'Got "111" instead of "101"'
volume set: failed: Volume patchy does not exist
not ok  23 [    155/    868] <  21> 'gluster --mode=script --wignore volume set patchy performance.stat-prefetch off' -> ''
volume set: failed: Volume patchy does not exist
not ok  24 [   3375/    718] <  23> 'gluster --mode=script --wignore volume set patchy performance.stat-prefetch on' -> ''
volume set: failed: Volume patchy does not exist
not ok  25 [   3472/    305] <  21> 'gluster --mode=script --wignore volume set patchy performance.stat-prefetch off' -> ''
volume set: failed: Volume patchy does not exist
not ok  26 [   3581/    662] <  23> 'gluster --mode=script --wignore volume set patchy performance.stat-prefetch on' -> ''
volume set: failed: Volume patchy does not exist
not ok  27 [   3436/    553] <  21> 'gluster --mode=script --wignore volume set patchy performance.stat-prefetch off' -> ''
volume set: failed: Volume patchy does not exist
not ok  28 [   3358/   1973] <  23> 'gluster --mode=script --wignore volume set patchy performance.stat-prefetch on' -> ''
ok  29 [   3202/     20] <  59> 'rm -f /mnt/glusterfs/0/lock'
not ok  30 [   1087/    226] <  61> '100 count_files' -> 'Got "108" instead of "100"'
ok  31 [    179/     75] <  62> 'rm -f /mnt/glusterfs/0/1 /mnt/glusterfs/0/2 /mnt/glusterfs/0/3 /mnt/glusterfs/0/4 /mnt/glusterfs/0/5 /mnt/glusterfs/0/6 /mnt/glusterfs/0/7 /mnt/glusterfs/0/8 /mnt/glusterfs/0/9 /mnt/glusterfs/0/10 /mnt/glusterfs/0/11 /mnt/glusterfs/0/12 /mnt/glusterfs/0/13 /mnt/glusterfs/0/14 /mnt/glusterfs/0/15 /mnt/glusterfs/0/16 /mnt/glusterfs/0/17 /mnt/glusterfs/0/18 /mnt/glusterfs/0/19 /mnt/glusterfs/0/20 /mnt/glusterfs/0/21 /mnt/glusterfs/0/22 /mnt/glusterfs/0/23 /mnt/glusterfs/0/24 /mnt/glusterfs/0/25 /mnt/glusterfs/0/26 /mnt/glusterfs/0/27 /mnt/glusterfs/0/28 /mnt/glusterfs/0/29 /mnt/glusterfs/0/30 /mnt/glusterfs/0/31 /mnt/glusterfs/0/32 /mnt/glusterfs/0/33 /mnt/glusterfs/0/34 /mnt/glusterfs/0/35 /mnt/glusterfs/0/36 /mnt/glusterfs/0/37 /mnt/glusterfs/0/38 /mnt/glusterfs/0/39 /mnt/glusterfs/0/40 /mnt/glusterfs/0/41 /mnt/glusterfs/0/42 /mnt/glusterfs/0/43 /mnt/glusterfs/0/44 /mnt/glusterfs/0/45 /mnt/glusterfs/0/46 /mnt/glusterfs/0/47 /mnt/glusterfs/0/48 /mnt/glusterfs/0/49 /mnt/glusterfs/0/50 /mnt/glusterfs/0/51 /mnt/glusterfs/0/52 /mnt/glusterfs/0/53 /mnt/glusterfs/0/54 /mnt/glusterfs/0/55 /mnt/glusterfs/0/56 /mnt/glusterfs/0/57 /mnt/glusterfs/0/58 /mnt/glusterfs/0/59 /mnt/glusterfs/0/60 /mnt/glusterfs/0/61 /mnt/glusterfs/0/62 /mnt/glusterfs/0/63 /mnt/glusterfs/0/64 /mnt/glusterfs/0/65 /mnt/glusterfs/0/66 /mnt/glusterfs/0/67 /mnt/glusterfs/0/68 /mnt/glusterfs/0/69 /mnt/glusterfs/0/70 /mnt/glusterfs/0/71 /mnt/glusterfs/0/72 /mnt/glusterfs/0/73 /mnt/glusterfs/0/74 /mnt/glusterfs/0/75 /mnt/glusterfs/0/76 /mnt/glusterfs/0/77 /mnt/glusterfs/0/78 /mnt/glusterfs/0/79 /mnt/glusterfs/0/80 /mnt/glusterfs/0/81 /mnt/glusterfs/0/82 /mnt/glusterfs/0/83 /mnt/glusterfs/0/84 /mnt/glusterfs/0/85 /mnt/glusterfs/0/86 /mnt/glusterfs/0/87 /mnt/glusterfs/0/88 /mnt/glusterfs/0/89 /mnt/glusterfs/0/90 /mnt/glusterfs/0/91 /mnt/glusterfs/0/92 /mnt/glusterfs/0/93 /mnt/glusterfs/0/94 /mnt/glusterfs/0/95 /mnt/glusterfs/0/96 /mnt/glusterfs/0/97 /mnt/glusterfs/0/98 /mnt/glusterfs/0/99 /mnt/glusterfs/0/100'
ok  32 [    103/     89] <  63> '0 count_files'
losetup: /d/dev/loop*: failed to use device: No such device
Failed 23/32 subtests 
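
Nearly all 23 failures are one cascade: subtest 3's 'volume create' failed because the brick directory /d/backends/patchy0 was missing on 148.100.84.186, so every later 'volume set', 'volume start', and mount against the nonexistent volume failed with it. A sketch of guarding that precondition, with the host and paths copied from the log (the guard itself is illustrative, not part of the test):

    #!/bin/bash
    # Illustrative precondition guard: create the backend brick directories
    # before 'volume create'. Host and paths are taken from the log above.
    H=148.100.84.186
    mkdir -p /d/backends/patchy{0,1,2}
    gluster --mode=script --wignore volume create patchy replica 3 \
        "$H:/d/backends/patchy0" "$H:/d/backends/patchy1" "$H:/d/backends/patchy2"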

Test Summary Report
-------------------
./tests/basic/fuse/active-io-graph-switch.t (Wstat: 0 Tests: 32 Failed: 23)
  Failed tests:  3-6, 8-14, 16, 18-20, 22-28, 30
Files=1, Tests=32, 87 wallclock secs ( 0.02 usr  0.00 sys +  2.02 cusr  7.63 csys =  9.67 CPU)
Result: FAIL
Logs preserved in tarball active-io-graph-switch-iteration-1.tar.gz
./tests/basic/fuse/active-io-graph-switch.t: bad status 1
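
The count_files mismatches (109, 111, 108 where 101 or 100 were expected; 9 where 0 was expected) are consistent with stale entries from the earlier aborted cleanup still being visible under the mount point. A sketch of what such a check typically looks like in these .t tests, assuming the usual EXPECT helper from tests/include.rc; the function body is illustrative:

    # Illustrative count_files helper; the real test's definition may differ.
    count_files () {
        ls /mnt/glusterfs/0 | wc -l
    }

    # EXPECT compares its first argument against the command's output, e.g.
    #   EXPECT "101" count_files
    # which is the form of check that reported 'Got "109" instead of "101"'.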

       *********************************
       *       REGRESSION FAILED       *
       * Retrying failed tests in case *
       * we got some spurious failures *
       *********************************

Logs preserved in tarball active-io-graph-switch-iteration-2.tar.gz
./tests/basic/fuse/active-io-graph-switch.t timed out after 200 seconds
End of test ./tests/basic/fuse/active-io-graph-switch.t
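
The retry pass exists to filter out spurious failures, but here iteration 2 hit the 200-second kill timer instead of passing. The pattern amounts to re-running the failed .t under a timeout; a minimal sketch (the prove invocation is an assumption inferred from the TAP output above; only the test path and the 200-second limit appear in the log):

    #!/bin/bash
    # Illustrative retry-with-timeout wrapper for a failed regression test.
    t=./tests/basic/fuse/active-io-graph-switch.t
    if ! prove -vf "$t"; then
        # Retry once in case the failure was spurious; kill it if it hangs.
        timeout 200 prove -vf "$t" || echo "$t: bad status $?"
    fi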
================================================================================


======================================== (176 / 832) ========================================
[06:55:45] Running tests in file ./tests/basic/geo-replication/marker-xattrs.t
FATAL: command execution failed
java.io.IOException
	at hudson.remoting.Channel.close(Channel.java:1491)
	at hudson.remoting.Channel.close(Channel.java:1447)
	at hudson.slaves.SlaveComputer.closeChannel(SlaveComputer.java:923)
	at hudson.slaves.SlaveComputer.kill(SlaveComputer.java:889)
	at hudson.model.AbstractCIBase.killComputer(AbstractCIBase.java:95)
	at jenkins.model.Jenkins.lambda$_cleanUpDisconnectComputers$11(Jenkins.java:3705)
	at hudson.model.Queue._withLock(Queue.java:1395)
	at hudson.model.Queue.withLock(Queue.java:1269)
	at jenkins.model.Jenkins._cleanUpDisconnectComputers(Jenkins.java:3701)
	at jenkins.model.Jenkins.cleanUp(Jenkins.java:3582)
	at hudson.WebAppMain.contextDestroyed(WebAppMain.java:374)
	at org.eclipse.jetty.server.handler.ContextHandler.callContextDestroyed(ContextHandler.java:1080)
	at org.eclipse.jetty.servlet.ServletContextHandler.callContextDestroyed(ServletContextHandler.java:584)
	at org.eclipse.jetty.server.handler.ContextHandler.contextDestroyed(ContextHandler.java:1043)
	at org.eclipse.jetty.servlet.ServletHandler.doStop(ServletHandler.java:319)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:94)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.stop(ContainerLifeCycle.java:180)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.doStop(ContainerLifeCycle.java:201)
	at org.eclipse.jetty.server.handler.AbstractHandler.doStop(AbstractHandler.java:108)
	at org.eclipse.jetty.security.SecurityHandler.doStop(SecurityHandler.java:430)
	at org.eclipse.jetty.security.ConstraintSecurityHandler.doStop(ConstraintSecurityHandler.java:423)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:94)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.stop(ContainerLifeCycle.java:180)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.doStop(ContainerLifeCycle.java:201)
	at org.eclipse.jetty.server.handler.AbstractHandler.doStop(AbstractHandler.java:108)
	at org.eclipse.jetty.server.session.SessionHandler.doStop(SessionHandler.java:520)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:94)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.stop(ContainerLifeCycle.java:180)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.doStop(ContainerLifeCycle.java:201)
	at org.eclipse.jetty.server.handler.AbstractHandler.doStop(AbstractHandler.java:108)
	at org.eclipse.jetty.server.handler.ContextHandler.stopContext(ContextHandler.java:1066)
	at org.eclipse.jetty.servlet.ServletContextHandler.stopContext(ServletContextHandler.java:386)
	at org.eclipse.jetty.webapp.WebAppContext.stopWebapp(WebAppContext.java:1454)
	at org.eclipse.jetty.webapp.WebAppContext.stopContext(WebAppContext.java:1420)
	at org.eclipse.jetty.server.handler.ContextHandler.doStop(ContextHandler.java:1120)
	at org.eclipse.jetty.servlet.ServletContextHandler.doStop(ServletContextHandler.java:297)
	at org.eclipse.jetty.webapp.WebAppContext.doStop(WebAppContext.java:547)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:94)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.stop(ContainerLifeCycle.java:180)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.doStop(ContainerLifeCycle.java:201)
	at org.eclipse.jetty.server.handler.AbstractHandler.doStop(AbstractHandler.java:108)
	at org.eclipse.jetty.server.Server.doStop(Server.java:470)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:94)
	at winstone.Launcher.shutdown(Launcher.java:354)
	at winstone.ShutdownHook.run(ShutdownHook.java:26)
Caused: hudson.remoting.ChannelClosedException: Channel "hudson.remoting.Channel@37243a07:builder-el8-s390x-1.ibm-l1.gluster.org": Remote call on builder-el8-s390x-1.ibm-l1.gluster.org failed. The channel is closing down or has closed down
	at hudson.remoting.Channel.call(Channel.java:993)
	at hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:285)
	at com.sun.proxy.$Proxy88.isAlive(Unknown Source)
	at hudson.Launcher$RemoteLauncher$ProcImpl.isAlive(Launcher.java:1215)
	at hudson.Launcher$RemoteLauncher$ProcImpl.join(Launcher.java:1207)
	at hudson.tasks.CommandInterpreter.join(CommandInterpreter.java:195)
	at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:145)
	at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:92)
	at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
	at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:816)
	at hudson.model.Build$BuildExecution.build(Build.java:199)
	at hudson.model.Build$BuildExecution.doRun(Build.java:164)
	at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:524)
	at hudson.model.Run.execute(Run.java:1897)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:44)
	at hudson.model.ResourceController.execute(ResourceController.java:101)
	at hudson.model.Executor.run(Executor.java:442)
FATAL: Unable to delete script file /tmp/jenkins2064552280624449527.sh
java.io.IOException
	(same shutdown stack trace as the IOException above; 45 identical frames from hudson.remoting.Channel.close through winstone.ShutdownHook.run omitted)
Caused: hudson.remoting.ChannelClosedException: Channel "hudson.remoting.Channel@37243a07:builder-el8-s390x-1.ibm-l1.gluster.org": Remote call on builder-el8-s390x-1.ibm-l1.gluster.org failed. The channel is closing down or has closed down
	at hudson.remoting.Channel.call(Channel.java:993)
	at hudson.FilePath.act(FilePath.java:1194)
	at hudson.FilePath.act(FilePath.java:1183)
	at hudson.FilePath.delete(FilePath.java:1730)
	at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:163)
	at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:92)
	at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
	at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:816)
	at hudson.model.Build$BuildExecution.build(Build.java:199)
	at hudson.model.Build$BuildExecution.doRun(Build.java:164)
	at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:524)
	at hudson.model.Run.execute(Run.java:1897)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:44)
	at hudson.model.ResourceController.execute(ResourceController.java:101)
	at hudson.model.Executor.run(Executor.java:442)
Build step 'Execute shell' marked build as failure
ERROR: Unable to tear down: null
java.lang.NullPointerException
	at org.jenkinsci.plugins.credentialsbinding.impl.UnbindableDir.tempDir(UnbindableDir.java:67)
	at org.jenkinsci.plugins.credentialsbinding.impl.UnbindableDir.secretsDir(UnbindableDir.java:62)
	at org.jenkinsci.plugins.credentialsbinding.impl.UnbindableDir.access$000(UnbindableDir.java:23)
	at org.jenkinsci.plugins.credentialsbinding.impl.UnbindableDir$UnbinderImpl.unbind(UnbindableDir.java:84)
	at org.jenkinsci.plugins.credentialsbinding.impl.SecretBuildWrapper$1.tearDown(SecretBuildWrapper.java:111)
	at hudson.model.AbstractBuild$AbstractBuildExecution.tearDownBuildEnvironments(AbstractBuild.java:564)
	at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:528)
	at hudson.model.Run.execute(Run.java:1897)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:44)
	at hudson.model.ResourceController.execute(ResourceController.java:101)
	at hudson.model.Executor.run(Executor.java:442)
ERROR: builder-el8-s390x-1.ibm-l1.gluster.org is offline; cannot locate java-1.6.0-openjdk-1.6.0.0.x86_64

