[Gluster-Maintainers] Build failed in Jenkins: centos8-s390-regression #700

jenkins at build.gluster.org
Wed Jan 15 01:31:29 UTC 2025


See <https://build.gluster.org/job/centos8-s390-regression/700/display/redirect>

Changes:


------------------------------------------
[...truncated 5.32 MiB...]
volume remove-brick status: failed: Glusterd Syncop Mgmt brick op 'Rebalance' failed. Please check brick log file for details.
ok  32 [     41/    924] < 114> 'completed remove_brick_status_completed_field patchy 148.100.84.23:/d/backends/patchy5'
ok  33 [     14/    430] < 115> 'completed remove_brick_status_completed_field patchy 148.100.84.23:/d/backends/patchy4'
ok  34 [     23/   2174] < 116> 'gluster --mode=script --wignore volume remove-brick patchy 148.100.84.23:/d/backends/patchy4 148.100.84.23:/d/backends/patchy5 commit'
ok  35 [     27/   1271] < 117> 'gluster --mode=script --wignore volume remove-brick patchy replica 1 148.100.84.23:/d/backends/patchy2 force'
ok
All tests successful.
Files=1, Tests=35, 54 wallclock secs ( 0.03 usr  0.00 sys +  2.06 cusr  1.52 csys =  3.61 CPU)
Result: PASS
Logs preserved in tarball remove-brick-testcases-iteration-1.tar.gz
End of test ./tests/bugs/glusterd/remove-brick-testcases.t
================================================================================
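
For orientation, the remove-brick flow these .t files exercise is the standard start/status/commit sequence of the gluster CLI; the transient "remove-brick status: failed" line above is most likely printed while the test polls for the status to reach "completed". A minimal sketch, reusing the host and brick paths from the log (remove_brick_status_completed_field is a test-library helper, assumed here to poll the status command):

  # begin migrating data off the bricks being removed
  gluster --mode=script --wignore volume remove-brick patchy \
      148.100.84.23:/d/backends/patchy4 148.100.84.23:/d/backends/patchy5 start
  # poll until the per-brick status field reports "completed"
  gluster --mode=script --wignore volume remove-brick patchy \
      148.100.84.23:/d/backends/patchy4 148.100.84.23:/d/backends/patchy5 status
  # finalize the removal once migration is done
  gluster --mode=script --wignore volume remove-brick patchy \
      148.100.84.23:/d/backends/patchy4 148.100.84.23:/d/backends/patchy5 commit
  # shrinking a replica set instead takes the new replica count plus force
  gluster --mode=script --wignore volume remove-brick patchy replica 1 \
      148.100.84.23:/d/backends/patchy2 force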


======================================== (467 / 839) ========================================
[01:19:10] Running tests in file ./tests/bugs/glusterd/remove-brick-validation.t
./tests/bugs/glusterd/remove-brick-validation.t .. 
1..25
ok   1 [   1082/   8163] <  14> 'launch_cluster 3'
ok   2 [     35/   1148] <  17> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-validation.t_cli1.log peer probe 127.1.1.2'
ok   3 [     40/   1465] <  19> '1 peer_count 1'
ok   4 [     51/    792] <  23> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-validation.t_cli1.log peer probe 127.1.1.3'
ok   5 [     33/    477] <  24> '2 peer_count 1'
ok   6 [     15/   3031] <  26> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-validation.t_cli1.log volume create patchy 127.1.1.1:/d/backends/1/patchy 127.1.1.2:/d/backends/2/patchy'
ok   7 [     41/   3904] <  27> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-validation.t_cli1.log volume start patchy'
ok   8 [     34/    135] <  32> '! gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-validation.t_cli1.log volume remove-brick patchy 127.1.1.2:/d/backends/2/patchy start'
ok   9 [     70/   3048] <  34> 'glusterd --xlator-option management.working-directory=/d/backends/2/glusterd --xlator-option management.transport.socket.bind-address=127.1.1.2 --xlator-option management.run-directory=/d/backends/2/run/gluster --xlator-option management.glusterd-sockfile=/d/backends/2/glusterd/gd.sock --xlator-option management.logging-directory=/var/log/glusterfs/2 --log-file=/var/log/glusterfs/2/remove-brick-validation.t_glusterd2.log --pid-file=/d/backends/2/glusterd.pid'
ok  10 [    252/   1373] <  35> '1 cluster_brick_up_status 1 patchy 127.1.1.2 /d/backends/2/patchy'
ok  11 [     57/    157] <  37> '2 peer_count 1'
ok  12 [     30/    132] <  40> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/2/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-validation.t_cli2.log volume status'
ok  13 [     89/    422] <  42> '2 peer_count 3'
ok  14 [     96/   6973] <  43> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-validation.t_cli1.log volume remove-brick patchy 127.1.1.2:/d/backends/2/patchy start'
ok  15 [     36/     80] <  47> '! gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-validation.t_cli1.log volume remove-brick patchy 127.1.1.2:/d/backends/2/patchy commit'
ok  16 [     77/   2639] <  49> 'glusterd --xlator-option management.working-directory=/d/backends/2/glusterd --xlator-option management.transport.socket.bind-address=127.1.1.2 --xlator-option management.run-directory=/d/backends/2/run/gluster --xlator-option management.glusterd-sockfile=/d/backends/2/glusterd/gd.sock --xlator-option management.logging-directory=/var/log/glusterfs/2 --log-file=/var/log/glusterfs/2/remove-brick-validation.t_glusterd2.log --pid-file=/d/backends/2/glusterd.pid'
ok  17 [     38/   7504] <  50> '1 cluster_brick_up_status 1 patchy 127.1.1.2 /d/backends/2/patchy'
ok  18 [     39/    108] <  52> '2 peer_count 1'
ok  19 [     38/    218] <  55> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/2/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-validation.t_cli2.log volume status'
ok  20 [    343/   1550] <  57> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-validation.t_cli1.log volume remove-brick patchy 127.1.1.2:/d/backends/2/patchy stop'
ok  21 [     24/    168] <  60> '1 peer_count 1'
ok  22 [     49/   5983] <  62> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-validation.t_cli1.log volume remove-brick patchy 127.1.1.2:/d/backends/2/patchy start'
ok  23 [     23/   2059] <  64> 'start_glusterd 3'
ok  24 [     17/    190] <  65> '2 peer_count 1'
ok  25 [     75/   2978] <  66> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/3/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-validation.t_cli3.log volume status'
ok
All tests successful.
Files=1, Tests=25, 58 wallclock secs ( 0.01 usr  0.00 sys +  1.34 cusr  1.42 csys =  2.77 CPU)
Result: PASS
Logs preserved in tarball remove-brick-validation-iteration-1.tar.gz
End of test ./tests/bugs/glusterd/remove-brick-validation.t
================================================================================
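
remove-brick-validation.t above runs against a three-daemon cluster on the 127.1.1.x loopback addresses rather than a single glusterd. The pattern, assembled from the commands visible in the log (log-file name shortened here for readability), is to start each glusterd with its own working directory, bind address and management socket, then point the CLI at a specific daemon via --glusterd-sock:

  # bring up the second daemon of the test cluster
  glusterd --xlator-option management.working-directory=/d/backends/2/glusterd \
           --xlator-option management.transport.socket.bind-address=127.1.1.2 \
           --xlator-option management.run-directory=/d/backends/2/run/gluster \
           --xlator-option management.glusterd-sockfile=/d/backends/2/glusterd/gd.sock \
           --xlator-option management.logging-directory=/var/log/glusterfs/2 \
           --log-file=/var/log/glusterfs/2/glusterd2.log \
           --pid-file=/d/backends/2/glusterd.pid

  # drive the cluster from node 1's management socket
  gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock \
          peer probe 127.1.1.2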


======================================== (468 / 839) ========================================
[01:20:09] Running tests in file ./tests/bugs/glusterd/removing-multiple-bricks-in-single-remove-brick-command.t
./tests/bugs/glusterd/removing-multiple-bricks-in-single-remove-brick-command.t .. 
1..22
ok   1 [    781/   2134] <   9> 'glusterd'
ok   2 [     30/    165] <  10> 'pidof glusterd'
No volumes present
ok   3 [    323/    244] <  11> 'gluster --mode=script --wignore volume info'
ok   4 [     21/    415] <  14> 'gluster --mode=script --wignore volume create patchy replica 2 148.100.84.23:/d/backends/patchy1 148.100.84.23:/d/backends/patchy2 148.100.84.23:/d/backends/patchy3 148.100.84.23:/d/backends/patchy4 148.100.84.23:/d/backends/patchy5 148.100.84.23:/d/backends/patchy6'
ok   5 [     34/   4186] <  15> 'gluster --mode=script --wignore volume start patchy'
ok   6 [    414/     49] <  19> 'glusterfs -s 148.100.84.23 --volfile-id patchy /mnt/glusterfs/0'
ok   7 [     32/   2567] <  20> 'touch /mnt/glusterfs/0/file1 /mnt/glusterfs/0/file2 /mnt/glusterfs/0/file3 /mnt/glusterfs/0/file4 /mnt/glusterfs/0/file5 /mnt/glusterfs/0/file6 /mnt/glusterfs/0/file7 /mnt/glusterfs/0/file8 /mnt/glusterfs/0/file9 /mnt/glusterfs/0/file10'
ok   8 [     37/   5533] <  29> 'success remove_brick_start_status'
ok   9 [     25/   2030] <  32> 'completed remove_brick_status_completed_field patchy 148.100.84.23:/d/backends/patchy6 148.100.84.23:/d/backends/patchy1 148.100.84.23:/d/backends/patchy2 148.100.84.23:/d/backends/patchy5'
ok  10 [     34/   4268] <  40> 'success remove_brick_commit_status'
ok  11 [    244/     10] <  44> 'Replicate echo Replicate'
ok  12 [     37/     74] <  45> 'Y force_umount /mnt/glusterfs/0'
ok  13 [     27/    628] <  50> 'gluster --mode=script --wignore volume create patchy1 replica 3 148.100.84.23:/d/backends/patchy10 148.100.84.23:/d/backends/patchy11 148.100.84.23:/d/backends/patchy12 148.100.84.23:/d/backends/patchy13 148.100.84.23:/d/backends/patchy14 148.100.84.23:/d/backends/patchy15 148.100.84.23:/d/backends/patchy16 148.100.84.23:/d/backends/patchy17 148.100.84.23:/d/backends/patchy18'
ok  14 [     27/   3080] <  51> 'gluster --mode=script --wignore volume start patchy1'
ok  15 [     32/    115] <  52> '9 brick_count patchy1'
ok  16 [     33/     41] <  55> 'glusterfs -s 148.100.84.23 --volfile-id patchy1 /mnt/glusterfs/0'
ok  17 [     32/    383] <  56> 'touch /mnt/glusterfs/0/zerobytefile.txt'
ok  18 [     32/   1430] <  57> 'mkdir /mnt/glusterfs/0/test_dir'
ok  19 [     48/    303] <  58> 'dd if=/dev/zero of=/mnt/glusterfs/0/file bs=1024 count=1024'
ok  20 [     33/    110] <  71> 'failed remove_brick_start'
ok  21 [    206/   4388] <  76> 'success remove_brick'
ok  22 [     40/    104] <  78> 'Y force_umount /mnt/glusterfs/0'
ok
All tests successful.
Files=1, Tests=22, 36 wallclock secs ( 0.02 usr  0.00 sys +  0.92 cusr  0.98 csys =  1.92 CPU)
Result: PASS
Logs preserved in tarball removing-multiple-bricks-in-single-remove-brick-command-iteration-1.tar.gz
End of test ./tests/bugs/glusterd/removing-multiple-bricks-in-single-remove-brick-command.t
================================================================================
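
removing-multiple-bricks-in-single-remove-brick-command.t above drops two complete replica pairs of a replica-2 volume in one command; remove_brick_start_status and remove_brick_commit_status are test-library helpers that presumably issue something along these lines (brick list taken from the status check above):

  gluster --mode=script --wignore volume remove-brick patchy \
      148.100.84.23:/d/backends/patchy6 148.100.84.23:/d/backends/patchy1 \
      148.100.84.23:/d/backends/patchy2 148.100.84.23:/d/backends/patchy5 start
  # once every listed brick reports status "completed":
  gluster --mode=script --wignore volume remove-brick patchy \
      148.100.84.23:/d/backends/patchy6 148.100.84.23:/d/backends/patchy1 \
      148.100.84.23:/d/backends/patchy2 148.100.84.23:/d/backends/patchy5 commit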


======================================== (469 / 839) ========================================
[01:20:46] Running tests in file ./tests/bugs/glusterd/replace-brick-operations.t
./tests/bugs/glusterd/replace-brick-operations.t .. 
1..15
ok   1 [    881/   2114] <  11> 'glusterd'
ok   2 [     37/     21] <  12> 'pidof glusterd'
ok   3 [     38/    251] <  15> 'gluster --mode=script --wignore volume create patchy replica 2 148.100.84.23:/d/backends/patchy1 148.100.84.23:/d/backends/patchy2'
ok   4 [     43/   3501] <  16> 'gluster --mode=script --wignore volume start patchy'
ok   5 [     24/     77] <  24> '! gluster --mode=script --wignore volume replace-brick patchy 148.100.84.23:/d/backends/patchy2 148.100.84.23:/d/backends/patchy3 start'
ok   6 [     37/     83] <  25> '! gluster --mode=script --wignore volume replace-brick patchy 148.100.84.23:/d/backends/patchy2 148.100.84.23:/d/backends/patchy3 status'
ok   7 [     38/    425] <  26> '! gluster --mode=script --wignore volume replace-brick patchy 148.100.84.23:/d/backends/patchy2 148.100.84.23:/d/backends/patchy3 abort'
ok   8 [     49/   3254] <  30> 'gluster --mode=script --wignore volume replace-brick patchy 148.100.84.23:/d/backends/patchy2 148.100.84.23:/d/backends/patchy3 commit force'
ok   9 [     32/     39] <  34> 'glusterfs --volfile-id=patchy --volfile-server=148.100.84.23 /mnt/glusterfs/0'
ok  10 [     33/   3860] <  36> '1 afr_child_up_status patchy 1'
ok  11 [     51/   2797] <  39> 'gluster --mode=script --wignore volume replace-brick patchy 148.100.84.23:/d/backends/patchy1 148.100.84.23:/d/backends/patchy1_new commit force'
ok  12 [     52/    191] <  41> '1 afr_child_up_status patchy 1'
ok  13 [     34/   1268] <  43> 'kill_brick patchy 148.100.84.23 /d/backends/patchy1_new'
ok  14 [    176/   1813] <  46> 'gluster --mode=script --wignore volume replace-brick patchy 148.100.84.23:/d/backends/patchy1_new 148.100.84.23:/d/backends/patchy1_newer commit force'
ok  15 [     33/    130] <  48> '1 afr_child_up_status patchy 1'
ok
All tests successful.
Files=1, Tests=15, 22 wallclock secs ( 0.02 usr  0.00 sys +  0.69 cusr  1.48 csys =  2.19 CPU)
Result: PASS
Logs preserved in tarball replace-brick-operations-iteration-1.tar.gz
End of test ./tests/bugs/glusterd/replace-brick-operations.t
================================================================================
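
replace-brick-operations.t above confirms that "commit force" is the only accepted replace-brick action (the negated '!' checks assert that start, status and abort are rejected) and that the AFR child on the replacement brick comes back up afterwards. The supported form, as used in the log:

  gluster --mode=script --wignore volume replace-brick patchy \
      148.100.84.23:/d/backends/patchy2 148.100.84.23:/d/backends/patchy3 commit force
  # the start/status/abort variants of replace-brick are expected to fail:
  #   gluster volume replace-brick patchy <old-brick> <new-brick> start|status|abort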


======================================== (470 / 839) ========================================
[01:21:10] Running tests in file ./tests/bugs/glusterd/reset-brick-and-daemons-follow-quorum.t
./tests/bugs/glusterd/reset-brick-and-daemons-follow-quorum.t .. 
1..17
ok   1 [   1172/  11760] <  24> 'launch_cluster 3'
ok   2 [     36/   1391] <  25> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/reset-brick-and-daemons-follow-quorum.t_cli1.log peer probe 127.1.1.2'
ok   3 [     31/    706] <  26> '1 check_peers'
ok   4 [     26/   1522] <  28> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/reset-brick-and-daemons-follow-quorum.t_cli1.log volume create patchy replica 2 127.1.1.1:/d/backends/patchy 127.1.1.2:/d/backends/patchy'
ok   5 [     23/   3531] <  29> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/reset-brick-and-daemons-follow-quorum.t_cli1.log volume start patchy'
ok   6 [     18/    112] <  33> '! gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/reset-brick-and-daemons-follow-quorum.t_cli1.log volume reset-brick patchy 127.1.1.1:/d/backends/patchy 127.1.1.1:/d/backends/patchy commit force'
ok   7 [     33/   1117] <  35> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/reset-brick-and-daemons-follow-quorum.t_cli1.log volume reset-brick patchy 127.1.1.1:/d/backends/patchy start'
ok   8 [     29/   2876] <  37> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/reset-brick-and-daemons-follow-quorum.t_cli1.log volume reset-brick patchy 127.1.1.1:/d/backends/patchy 127.1.1.1:/d/backends/patchy commit force'
ok   9 [     32/    470] <  41> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/reset-brick-and-daemons-follow-quorum.t_cli1.log peer probe 127.1.1.3'
ok  10 [     26/   1897] <  42> '2 peer_count'
ok  11 [     33/    595] <  44> '1 cluster_brick_up_status 1 patchy 127.1.1.1 /d/backends/patchy'
ok  12 [     19/    125] <  45> '1 cluster_brick_up_status 1 patchy 127.1.1.2 /d/backends/patchy'
ok  13 [     22/    117] <  46> 'Y shd_up_status_1'
ok  14 [     22/    421] <  47> 'Y shd_up_status_2'
ok  15 [    694/     13] <  53> 'kill_glusterd 1'
ok  16 [     37/   2525] <  56> 'glusterd --xlator-option management.working-directory=/d/backends/1/glusterd --xlator-option management.transport.socket.bind-address=127.1.1.1 --xlator-option management.run-directory=/d/backends/1/run/gluster --xlator-option management.glusterd-sockfile=/d/backends/1/glusterd/gd.sock --xlator-option management.logging-directory=/var/log/glusterfs/1 --log-file=/var/log/glusterfs/1/reset-brick-and-daemons-follow-quorum.t_glusterd1.log --pid-file=/d/backends/1/glusterd.pid'
ok  17 [     23/   3302] <  61> 'Y shd_up_status_2'
ok
All tests successful.
Files=1, Tests=17, 35 wallclock secs ( 0.02 usr  0.00 sys +  1.14 cusr  1.10 csys =  2.26 CPU)
Result: PASS
Logs preserved in tarball reset-brick-and-daemons-follow-quorum-iteration-1.tar.gz
End of test ./tests/bugs/glusterd/reset-brick-and-daemons-follow-quorum.t
================================================================================
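
reset-brick-and-daemons-follow-quorum.t above uses the two-step reset-brick flow on a cluster address: "start" takes the brick offline and "commit force" brings it back on the same path (the earlier negated commit-force check presumably fails because no reset had been started yet). The two commands, as seen in the log:

  gluster --mode=script --wignore volume reset-brick patchy \
      127.1.1.1:/d/backends/patchy start
  gluster --mode=script --wignore volume reset-brick patchy \
      127.1.1.1:/d/backends/patchy 127.1.1.1:/d/backends/patchy commit force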


======================================== (471 / 839) ========================================
[01:21:46] Running tests in file ./tests/bugs/glusterd/reset-rebalance-state.t
./tests/bugs/glusterd/reset-rebalance-state.t .. 
1..13
ok   1 [   1185/   2489] <  34> 'glusterd'
ok   2 [     65/     78] <  35> 'pidof glusterd'
No volumes present
ok   3 [     58/    129] <  37> 'gluster --mode=script --wignore volume info'
ok   4 [     42/    433] <  38> 'gluster --mode=script --wignore volume create patchy replica 3 148.100.84.23:/d/backends/patchy1 148.100.84.23:/d/backends/patchy2 148.100.84.23:/d/backends/patchy3 148.100.84.23:/d/backends/patchy4 148.100.84.23:/d/backends/patchy5 148.100.84.23:/d/backends/patchy6 force'
ok   5 [    185/   5106] <  39> 'gluster --mode=script --wignore volume start patchy'
ok   6 [     45/   5115] <  13> 'gluster --mode=script --wignore volume rebalance patchy start'
ok   7 [     30/    990] <  14> 'completed rebalance_status_field patchy'
ok   8 [    281/      4] <  16> '[ completed == completed ]'
ok   9 [     22/   2636] <  20> 'gluster --mode=script --wignore volume replace-brick patchy 148.100.84.23:/d/backends/patchy1 148.100.84.23:/d/backends/patchy1_replace commit force'
ok  10 [     99/      2] <  22> '[ reset == reset ]'
ok  11 [     25/   1137] <  26> 'gluster --mode=script --wignore volume reset-brick patchy 148.100.84.23:/d/backends/patchy2 start'
ok  12 [     78/   1819] <  27> 'gluster --mode=script --wignore volume reset-brick patchy 148.100.84.23:/d/backends/patchy2 148.100.84.23:/d/backends/patchy2 commit force'
ok  13 [    133/      1] <  29> '[ reset == reset ]'
ok
All tests successful.
Files=1, Tests=13, 23 wallclock secs ( 0.01 usr  0.00 sys +  0.76 cusr  0.97 csys =  1.74 CPU)
Result: PASS
Logs preserved in tarball reset-rebalance-state-iteration-1.tar.gz
End of test ./tests/bugs/glusterd/reset-rebalance-state.t
================================================================================
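
reset-rebalance-state.t above appears to verify that the rebalance state recorded for the volume is reset after replace-brick and reset-brick operations (the '[ reset == reset ]' checks). The rebalance half of the sequence is simply:

  gluster --mode=script --wignore volume rebalance patchy start
  # poll the status until it reports "completed"
  gluster --mode=script --wignore volume rebalance patchy status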


======================================== (472 / 839) ========================================
[01:22:10] Running tests in file ./tests/bugs/glusterd/serialize-shd-manager-glusterd-restart.t
FATAL: command execution failed
java.io.EOFException
	at java.base/java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2933)
	at java.base/java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:3428)
	at java.base/java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:985)
	at java.base/java.io.ObjectInputStream.<init>(ObjectInputStream.java:416)
	at hudson.remoting.ObjectInputStreamEx.<init>(ObjectInputStreamEx.java:50)
	at hudson.remoting.Command.readFrom(Command.java:141)
	at hudson.remoting.Command.readFrom(Command.java:127)
	at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:35)
	at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:62)
Caused: java.io.IOException: Unexpected termination of the channel
	at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:80)
Caused: java.io.IOException: Backing channel 'builder-el8-s390x-3.ibm-l1.gluster.org' is disconnected.
	at hudson.remoting.RemoteInvocationHandler.channelOrFail(RemoteInvocationHandler.java:227)
	at hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:306)
	at jdk.proxy2/jdk.proxy2.$Proxy199.isAlive(Unknown Source)
	at hudson.Launcher$RemoteLauncher$ProcImpl.isAlive(Launcher.java:1212)
	at hudson.Launcher$RemoteLauncher$ProcImpl.join(Launcher.java:1204)
	at hudson.tasks.CommandInterpreter.join(CommandInterpreter.java:195)
	at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:145)
	at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:92)
	at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
	at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:818)
	at hudson.model.Build$BuildExecution.build(Build.java:199)
	at hudson.model.Build$BuildExecution.doRun(Build.java:164)
	at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:527)
	at hudson.model.Run.execute(Run.java:1831)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:44)
	at hudson.model.ResourceController.execute(ResourceController.java:101)
	at hudson.model.Executor.run(Executor.java:445)
FATAL: Unable to delete script file /tmp/jenkins12641252847262807307.sh
java.io.EOFException
	at java.base/java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2933)
	at java.base/java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:3428)
	at java.base/java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:985)
	at java.base/java.io.ObjectInputStream.<init>(ObjectInputStream.java:416)
	at hudson.remoting.ObjectInputStreamEx.<init>(ObjectInputStreamEx.java:50)
	at hudson.remoting.Command.readFrom(Command.java:141)
	at hudson.remoting.Command.readFrom(Command.java:127)
	at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:35)
	at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:62)
Caused: java.io.IOException: Unexpected termination of the channel
	at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:80)
Caused: hudson.remoting.ChannelClosedException: Channel "hudson.remoting.Channel@26eeaf3e:builder-el8-s390x-3.ibm-l1.gluster.org": Remote call on builder-el8-s390x-3.ibm-l1.gluster.org failed. The channel is closing down or has closed down
	at hudson.remoting.Channel.call(Channel.java:1105)
	at hudson.FilePath.act(FilePath.java:1228)
	at hudson.FilePath.act(FilePath.java:1217)
	at hudson.FilePath.delete(FilePath.java:1764)
	at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:163)
	at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:92)
	at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
	at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:818)
	at hudson.model.Build$BuildExecution.build(Build.java:199)
	at hudson.model.Build$BuildExecution.doRun(Build.java:164)
	at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:527)
	at hudson.model.Run.execute(Run.java:1831)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:44)
	at hudson.model.ResourceController.execute(ResourceController.java:101)
	at hudson.model.Executor.run(Executor.java:445)
Build step 'Execute shell' marked build as failure
ERROR: Unable to tear down: Cannot invoke "hudson.FilePath.getName()" because "ws" is null
java.lang.NullPointerException: Cannot invoke "hudson.FilePath.getName()" because "ws" is null
	at hudson.slaves.WorkspaceList.tempDir(WorkspaceList.java:313)
	at PluginClassLoader for credentials-binding//org.jenkinsci.plugins.credentialsbinding.impl.UnbindableDir.secretsDir(UnbindableDir.java:61)
	at PluginClassLoader for credentials-binding//org.jenkinsci.plugins.credentialsbinding.impl.UnbindableDir$UnbinderImpl.unbind(UnbindableDir.java:83)
	at PluginClassLoader for credentials-binding//org.jenkinsci.plugins.credentialsbinding.impl.SecretBuildWrapper$1.tearDown(SecretBuildWrapper.java:116)
	at hudson.model.AbstractBuild$AbstractBuildExecution.tearDownBuildEnvironments(AbstractBuild.java:567)
	at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:531)
	at hudson.model.Run.execute(Run.java:1831)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:44)
	at hudson.model.ResourceController.execute(ResourceController.java:101)
	at hudson.model.Executor.run(Executor.java:445)
ERROR: builder-el8-s390x-3.ibm-l1.gluster.org is offline; cannot locate java-1.6.0-openjdk-1.6.0.0.x86_64

