[Gluster-Maintainers] Build failed in Jenkins: centos8-s390-regression #152

jenkins at build.gluster.org
Wed Jun 14 20:06:43 UTC 2023


See <https://build.gluster.org/job/centos8-s390-regression/152/display/redirect>

Changes:


------------------------------------------
[...truncated 2.84 MB...]
All tests successful.
Files=1, Tests=15,  7 wallclock secs ( 0.02 usr  0.00 sys +  0.54 cusr  0.48 csys =  1.04 CPU)
Result: PASS
Logs preserved in tarball bug-1243798-iteration-1.tar.gz
End of test ./tests/bugs/quota/bug-1243798.t
================================================================================


======================================== (577 / 839) ========================================
[19:56:17] Running tests in file ./tests/bugs/quota/bug-1250582-volume-reset-should-not-remove-quota-quota-deem-statfs.t
./tests/bugs/quota/bug-1250582-volume-reset-should-not-remove-quota-quota-deem-statfs.t .. 
1..29
ok   1 [    161/   1199] <  13> 'glusterd'
ok   2 [     13/      7] <  14> 'pidof glusterd'
No volumes present
ok   3 [     12/     57] <  15> 'gluster --mode=script --wignore volume info'
ok   4 [     12/    102] <  17> 'gluster --mode=script --wignore volume create patchy replica 2 148.100.84.19:/d/backends/1 148.100.84.19:/d/backends/2'
ok   5 [     12/     57] <  18> 'Created volinfo_field patchy Status'
ok   6 [     12/   1504] <  20> 'gluster --mode=script --wignore volume start patchy'
ok   7 [     14/     58] <  21> 'Started volinfo_field patchy Status'
ok   8 [     12/   1182] <  23> 'gluster --mode=script --wignore volume quota patchy enable'
ok   9 [     13/     56] <  24> 'on volinfo_field patchy features.quota'
ok  10 [     12/     55] <  25> 'on volinfo_field patchy features.inode-quota'
ok  11 [     12/     55] <  26> 'on volinfo_field patchy features.quota-deem-statfs'
ok  12 [     12/    107] <  28> 'gluster --mode=script --wignore volume reset patchy'
ok  13 [     13/     57] <  29> 'on volinfo_field patchy features.quota'
ok  14 [     12/     56] <  30> 'on volinfo_field patchy features.inode-quota'
ok  15 [     12/     55] <  31> 'on volinfo_field patchy features.quota-deem-statfs'
ok  16 [     11/    104] <  33> 'gluster --mode=script --wignore volume reset patchy force'
ok  17 [     13/     56] <  34> 'on volinfo_field patchy features.quota'
ok  18 [     12/     56] <  35> 'on volinfo_field patchy features.inode-quota'
ok  19 [     14/     59] <  36> 'on volinfo_field patchy features.quota-deem-statfs'
ok  20 [     12/    179] <  38> 'gluster --mode=script --wignore volume reset patchy features.quota-deem-statfs'
ok  21 [     14/     57] <  39> 'on volinfo_field patchy features.quota-deem-statfs'
ok  22 [     12/     99] <  41> 'gluster --mode=script --wignore volume set patchy features.quota-deem-statfs off'
ok  23 [     13/     57] <  42> 'off volinfo_field patchy features.quota-deem-statfs'
ok  24 [     12/    101] <  44> 'gluster --mode=script --wignore volume set patchy features.quota-deem-statfs on'
ok  25 [     12/     56] <  45> 'on volinfo_field patchy features.quota-deem-statfs'
ok  26 [     12/    114] <  47> 'gluster --mode=script --wignore volume quota patchy disable'
ok  27 [     12/     57] <  48> 'off volinfo_field patchy features.quota'
ok  28 [     12/     60] <  49> 'off volinfo_field patchy features.inode-quota'
ok  29 [     13/     57] <  50> ' volinfo_field patchy features.quota-deem-statfs'
ok
All tests successful.
Files=1, Tests=29,  7 wallclock secs ( 0.02 usr  0.00 sys +  1.45 cusr  0.82 csys =  2.29 CPU)
Result: PASS
Logs preserved in tarball bug-1250582-volume-reset-should-not-remove-quota-quota-deem-statfs-iteration-1.tar.gz
End of test ./tests/bugs/quota/bug-1250582-volume-reset-should-not-remove-quota-quota-deem-statfs.t
================================================================================


======================================== (578 / 839) ========================================
[19:56:24] Running tests in file ./tests/bugs/quota/bug-1260545.t
./tests/bugs/quota/bug-1260545.t .. 
1..18
ok   1 [    219/   1200] <  12> 'glusterd'
No volumes present
ok   2 [     13/     56] <  13> 'gluster --mode=script --wignore volume info'
ok   3 [     12/    103] <  15> 'gluster --mode=script --wignore volume create patchy 148.100.84.19:/d/backends/patchy1 148.100.84.19:/d/backends/patchy2'
ok   4 [     13/    409] <  16> 'gluster --mode=script --wignore volume start patchy'
ok   5 [     13/   1252] <  18> 'gluster --mode=script --wignore volume quota patchy enable'
ok   6 [     12/     16] <  20> 'glusterfs --volfile-id=patchy --volfile-server=148.100.84.19 /mnt/glusterfs/0'
ok   7 [     12/    136] <  22> 'gluster --mode=script --wignore volume quota patchy limit-usage / 11MB'
ok   8 [     12/     84] <  23> 'gluster --mode=script --wignore volume quota patchy hard-timeout 0'
ok   9 [     12/     83] <  24> 'gluster --mode=script --wignore volume quota patchy soft-timeout 0'
ok  10 [     12/   5420] <  26> './tests/bugs/quota/quota /mnt/glusterfs/0/f1 256 40'
ok  11 [     13/    132] <  28> '10.0MB quotausage /'
ok  12 [     12/   5100] <  38> 'gluster --mode=script --wignore volume remove-brick patchy 148.100.84.19:/d/backends/patchy2 start'
ok  13 [     13/    377] <  39> 'completed remove_brick_status_completed_field patchy 148.100.84.19:/d/backends/patchy2'
ok  14 [     12/      1] <  42> '[ -f /d/backends/patchy1/f1 ]'
ok  15 [     11/      2] <  43> '[ -f /mnt/glusterfs/0/f1 ]'
ok  16 [     66/      1] <  47> '[ 0 = 0 ]'
ok  17 [     13/      1] <  48> '[ 0 = 0 ]'
ok  18 [     11/    112] <  50> '10.0MB quotausage /'
ok
All tests successful.
Files=1, Tests=18, 15 wallclock secs ( 0.02 usr  0.00 sys +  0.80 cusr  0.56 csys =  1.38 CPU)
Result: PASS
Logs preserved in tarball bug-1260545-iteration-1.tar.gz
End of test ./tests/bugs/quota/bug-1260545.t
================================================================================


======================================== (579 / 839) ========================================
[19:56:39] Running tests in file ./tests/bugs/quota/bug-1287996.t
./tests/bugs/quota/bug-1287996.t .. 
1..6
ok   1 [    150/   2362] <  12> 'launch_cluster 2'
ok   2 [     12/     72] <  14> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/bug-1287996.t_cli1.log volume create patchy 127.1.1.1:/d/backends/1/patchy'
ok   3 [     11/    207] <  15> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/bug-1287996.t_cli1.log volume start patchy'
ok   4 [     12/   1093] <  16> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/bug-1287996.t_cli1.log volume quota patchy enable'
ok   5 [     13/  40123] <  18> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/bug-1287996.t_cli1.log peer probe 127.1.1.2'
ok   6 [     13/   6749] <  19> '1 check_peers'
ok
All tests successful.
Files=1, Tests=6, 51 wallclock secs ( 0.02 usr  0.00 sys +  0.58 cusr  0.41 csys =  1.01 CPU)
Result: PASS
Logs preserved in tarball bug-1287996-iteration-1.tar.gz
End of test ./tests/bugs/quota/bug-1287996.t
================================================================================


======================================== (580 / 839) ========================================
[19:57:30] Running tests in file ./tests/bugs/quota/bug-1292020.t
./tests/bugs/quota/bug-1292020.t .. 
1..10
ok   1 [    160/   1184] <  13> 'glusterd'
ok   2 [     12/      8] <  14> 'pidof glusterd'
ok   3 [     12/     75] <  16> 'gluster --mode=script --wignore volume create patchy 148.100.84.19:/d/backends/patchy'
ok   4 [     12/    217] <  17> 'gluster --mode=script --wignore volume start patchy'
ok   5 [     13/   1141] <  18> 'gluster --mode=script --wignore volume quota patchy enable'
ok   6 [     12/    129] <  19> 'gluster --mode=script --wignore volume quota patchy limit-usage / 1'
ok   7 [     12/     15] <  21> 'glusterfs --volfile-server=148.100.84.19 --volfile-id=patchy /mnt/glusterfs/0'
ok   8 [     11/   4948] <  24> 'passed write_sample_data'
ok   9 [     14/   1174] <  26> 'gluster --mode=script --wignore volume stop patchy'
ok  10 [     13/    534] <  27> 'gluster --mode=script --wignore volume delete patchy'
ok
All tests successful.
Files=1, Tests=10, 10 wallclock secs ( 0.02 usr  0.00 sys +  0.41 cusr  0.38 csys =  0.81 CPU)
Result: PASS
Logs preserved in tarball bug-1292020-iteration-1.tar.gz
End of test ./tests/bugs/quota/bug-1292020.t
================================================================================


======================================== (581 / 839) ========================================
[19:57:41] Running tests in file ./tests/bugs/quota/bug-1293601.t
./tests/bugs/quota/bug-1293601.t .. 
1..10
ok   1 [    164/   1192] <   8> 'glusterd'
ok   2 [     12/     96] <  10> 'gluster --mode=script --wignore volume create patchy replica 2 148.100.84.19:/d/backends/patchy1 148.100.84.19:/d/backends/patchy2 148.100.84.19:/d/backends/patchy3 148.100.84.19:/d/backends/patchy4'
ok   3 [     12/   1900] <  11> 'gluster --mode=script --wignore volume start patchy'
ok   4 [     14/    401] <  12> '4 online_brick_count'
ok   5 [     13/   1180] <  13> 'gluster --mode=script --wignore volume quota patchy enable'
ok   6 [     13/     22] <  15> 'glusterfs --volfile-server=148.100.84.19 --volfile-id=patchy /mnt/glusterfs/0'
ok   7 [   3735/    144] <  26> '1.0MB quotausage /'
ok   8 [     13/    172] <  28> 'gluster --mode=script --wignore volume quota patchy disable'
ok   9 [     14/   1326] <  29> 'gluster --mode=script --wignore volume quota patchy enable'
ok  10 [     25/   2698] <  31> '1.0MB quotausage /'
ok
All tests successful.
Files=1, Tests=10, 13 wallclock secs ( 0.02 usr  0.00 sys +  1.33 cusr  1.82 csys =  3.17 CPU)
Result: PASS
Logs preserved in tarball bug-1293601-iteration-1.tar.gz
End of test ./tests/bugs/quota/bug-1293601.t
================================================================================


======================================== (582 / 839) ========================================
[19:57:54] Running tests in file ./tests/bugs/readdir-ahead/bug-1390050.t
./tests/bugs/readdir-ahead/bug-1390050.t .. 
1..10
ok   1 [    155/   1194] <   9> 'glusterd'
ok   2 [     12/     88] <  11> 'gluster --mode=script --wignore volume create patchy 148.100.84.19:/d/backends/patchy 148.100.84.19:/patchy'
ok   3 [     12/     99] <  12> 'gluster --mode=script --wignore volume set patchy readdir-ahead on'
ok   4 [     14/     90] <  17> 'gluster --mode=script --wignore volume set patchy performance.md-cache-timeout 600'
ok   5 [     12/    525] <  18> 'gluster --mode=script --wignore volume start patchy'
ok   6 [     13/     17] <  19> 'glusterfs --volfile-server=148.100.84.19 --volfile-id=patchy /mnt/glusterfs/0'
ok   7 [     71/      7] <  21> 'mkdir -p /mnt/glusterfs/0/subdir1/subdir2'
ok   8 [     14/     17] <  23> 'touch /mnt/glusterfs/0/subdir1/subdir2/file0 /mnt/glusterfs/0/subdir1/subdir2/file1 /mnt/glusterfs/0/subdir1/subdir2/file2 /mnt/glusterfs/0/subdir1/subdir2/file3 /mnt/glusterfs/0/subdir1/subdir2/file4 /mnt/glusterfs/0/subdir1/subdir2/file5 /mnt/glusterfs/0/subdir1/subdir2/file6 /mnt/glusterfs/0/subdir1/subdir2/file7 /mnt/glusterfs/0/subdir1/subdir2/file8 /mnt/glusterfs/0/subdir1/subdir2/file9 /mnt/glusterfs/0/subdir1/subdir2/file10'
ok   9 [     14/     52] <  25> 'build_tester ./tests/bugs/readdir-ahead/bug-1390050.c -o ./tests/bugs/readdir-ahead/rdd-tester'
ok  10 [     12/      4] <  26> './tests/bugs/readdir-ahead/rdd-tester /mnt/glusterfs/0/subdir1/subdir2 /mnt/glusterfs/0/subdir1/subdir2/file4'
ok
All tests successful.
Files=1, Tests=10,  3 wallclock secs ( 0.02 usr  0.00 sys +  0.34 cusr  0.35 csys =  0.71 CPU)
Result: PASS
Logs preserved in tarball bug-1390050-iteration-1.tar.gz
End of test ./tests/bugs/readdir-ahead/bug-1390050.t
================================================================================


======================================== (583 / 839) ========================================
[19:57:57] Running tests in file ./tests/bugs/readdir-ahead/bug-1436090.t
Timeout set is 300, default 200
FATAL: command execution failed
java.io.EOFException
	at java.base/java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2911)
	at java.base/java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:3406)
	at java.base/java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:932)
	at java.base/java.io.ObjectInputStream.<init>(ObjectInputStream.java:375)
	at hudson.remoting.ObjectInputStreamEx.<init>(ObjectInputStreamEx.java:49)
	at hudson.remoting.Command.readFrom(Command.java:142)
	at hudson.remoting.Command.readFrom(Command.java:128)
	at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:35)
	at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:61)
Caused: java.io.IOException: Unexpected termination of the channel
	at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:75)
Caused: java.io.IOException: Backing channel 'builder-el8-s390x-2.ibm-l1.gluster.org' is disconnected.
	at hudson.remoting.RemoteInvocationHandler.channelOrFail(RemoteInvocationHandler.java:215)
	at hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:285)
	at com.sun.proxy.$Proxy150.isAlive(Unknown Source)
	at hudson.Launcher$RemoteLauncher$ProcImpl.isAlive(Launcher.java:1215)
	at hudson.Launcher$RemoteLauncher$ProcImpl.join(Launcher.java:1207)
	at hudson.tasks.CommandInterpreter.join(CommandInterpreter.java:195)
	at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:145)
	at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:92)
	at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
	at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:818)
	at hudson.model.Build$BuildExecution.build(Build.java:199)
	at hudson.model.Build$BuildExecution.doRun(Build.java:164)
	at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:526)
	at hudson.model.Run.execute(Run.java:1900)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:44)
	at hudson.model.ResourceController.execute(ResourceController.java:101)
	at hudson.model.Executor.run(Executor.java:442)
FATAL: Unable to delete script file /tmp/jenkins2012472865752048751.sh
java.io.EOFException
	at java.base/java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2911)
	at java.base/java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:3406)
	at java.base/java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:932)
	at java.base/java.io.ObjectInputStream.<init>(ObjectInputStream.java:375)
	at hudson.remoting.ObjectInputStreamEx.<init>(ObjectInputStreamEx.java:49)
	at hudson.remoting.Command.readFrom(Command.java:142)
	at hudson.remoting.Command.readFrom(Command.java:128)
	at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:35)
	at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:61)
Caused: java.io.IOException: Unexpected termination of the channel
	at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:75)
Caused: hudson.remoting.ChannelClosedException: Channel "hudson.remoting.Channel@62ffa46b:builder-el8-s390x-2.ibm-l1.gluster.org": Remote call on builder-el8-s390x-2.ibm-l1.gluster.org failed. The channel is closing down or has closed down
	at hudson.remoting.Channel.call(Channel.java:993)
	at hudson.FilePath.act(FilePath.java:1192)
	at hudson.FilePath.act(FilePath.java:1181)
	at hudson.FilePath.delete(FilePath.java:1728)
	at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:163)
	at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:92)
	at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
	at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:818)
	at hudson.model.Build$BuildExecution.build(Build.java:199)
	at hudson.model.Build$BuildExecution.doRun(Build.java:164)
	at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:526)
	at hudson.model.Run.execute(Run.java:1900)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:44)
	at hudson.model.ResourceController.execute(ResourceController.java:101)
	at hudson.model.Executor.run(Executor.java:442)
Build step 'Execute shell' marked build as failure
ERROR: Unable to tear down: null
java.lang.NullPointerException
	at hudson.slaves.WorkspaceList.tempDir(WorkspaceList.java:313)
	at org.jenkinsci.plugins.credentialsbinding.impl.UnbindableDir.secretsDir(UnbindableDir.java:61)
	at org.jenkinsci.plugins.credentialsbinding.impl.UnbindableDir.access$000(UnbindableDir.java:22)
	at org.jenkinsci.plugins.credentialsbinding.impl.UnbindableDir$UnbinderImpl.unbind(UnbindableDir.java:83)
	at org.jenkinsci.plugins.credentialsbinding.impl.SecretBuildWrapper$1.tearDown(SecretBuildWrapper.java:116)
	at hudson.model.AbstractBuild$AbstractBuildExecution.tearDownBuildEnvironments(AbstractBuild.java:566)
	at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:530)
	at hudson.model.Run.execute(Run.java:1900)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:44)
	at hudson.model.ResourceController.execute(ResourceController.java:101)
	at hudson.model.Executor.run(Executor.java:442)
ERROR: builder-el8-s390x-2.ibm-l1.gluster.org is offline; cannot locate java-1.6.0-openjdk-1.6.0.0.x86_64

