[Gluster-Maintainers] Build failed in Jenkins: centos8-regression #1766
jenkins at build.gluster.org
Fri May 2 17:52:28 UTC 2025
See <https://build.gluster.org/job/centos8-regression/1766/display/redirect>
Changes:
------------------------------------------
[...truncated 2.90 MiB...]
ok 14 [ 26/ 1811] < 37> '0 STAT /d/backends/patchy2/file2'
ok 15 [ 19/ 160] < 40> '10.0MB quotausage /'
ok
All tests successful.
Files=1, Tests=15, 11 wallclock secs ( 0.02 usr 0.01 sys + 1.09 cusr 0.89 csys = 2.01 CPU)
Result: PASS
Logs preserved in tarball bug-1178130-iteration-1.tar.gz
End of test ./tests/bugs/quota/bug-1178130.t
================================================================================
======================================== (575 / 840) ========================================
Skipping bad test file ./tests/bugs/quota/bug-1235182.t
Reason: bug(s): 000000
================================================================================
======================================== (576 / 840) ========================================
[17:31:12] Running tests in file ./tests/bugs/quota/bug-1243798.t
./tests/bugs/quota/bug-1243798.t ..
1..15
ok 1 [ 242/ 2205] < 11> 'glusterd'
ok 2 [ 19/ 131] < 13> 'gluster --mode=script --wignore volume create patchy 172.30.1.95:/d/backends/patchy'
ok 3 [ 18/ 144] < 14> 'gluster --mode=script --wignore volume set patchy nfs.disable false'
ok 4 [ 19/ 1196] < 15> 'gluster --mode=script --wignore volume start patchy'
ok 5 [ 27/ 285] < 17> '1 is_nfs_export_available'
ok 6 [ 20/ 36] < 18> 'mount_nfs 172.30.1.95:/patchy /mnt/nfs/0 noac,nolock'
ok 7 [ 19/ 12] < 20> 'mkdir -p /mnt/nfs/0/dir1/dir2'
ok 8 [ 18/ 8] < 21> 'touch /mnt/nfs/0/dir1/dir2/file'
ok 9 [ 17/ 1166] < 23> 'gluster --mode=script --wignore volume quota patchy enable'
ok 10 [ 24/ 122] < 24> 'gluster --mode=script --wignore volume quota patchy hard-timeout 0'
ok 11 [ 19/ 122] < 25> 'gluster --mode=script --wignore volume quota patchy soft-timeout 0'
ok 12 [ 19/ 156] < 26> 'gluster --mode=script --wignore volume quota patchy limit-objects /dir1 10'
ok 13 [ 19/ 9] < 28> 'stat /mnt/nfs/0/dir1/dir2/file'
getfattr: Removing leading '/' from absolute path names
ok 14 [ 2046/ 158] < 42> '2 quota_object_list_field /dir1 5'
ok 15 [ 19/ 28] < 44> 'Y force_umount /mnt/nfs/0'
ok
All tests successful.
Files=1, Tests=15, 9 wallclock secs ( 0.02 usr 0.00 sys + 0.92 cusr 0.71 csys = 1.65 CPU)
Result: PASS
Logs preserved in tarball bug-1243798-iteration-1.tar.gz
End of test ./tests/bugs/quota/bug-1243798.t
================================================================================
======================================== (577 / 840) ========================================
[17:31:21] Running tests in file ./tests/bugs/quota/bug-1250582-volume-reset-should-not-remove-quota-quota-deem-statfs.t
./tests/bugs/quota/bug-1250582-volume-reset-should-not-remove-quota-quota-deem-statfs.t ..
1..29
ok 1 [ 241/ 2205] < 13> 'glusterd'
ok 2 [ 18/ 16] < 14> 'pidof glusterd'
No volumes present
ok 3 [ 18/ 102] < 15> 'gluster --mode=script --wignore volume info'
ok 4 [ 18/ 133] < 17> 'gluster --mode=script --wignore volume create patchy replica 2 172.30.1.95:/d/backends/1 172.30.1.95:/d/backends/2'
ok 5 [ 18/ 108] < 18> 'Created volinfo_field patchy Status'
ok 6 [ 18/ 1297] < 20> 'gluster --mode=script --wignore volume start patchy'
ok 7 [ 35/ 107] < 21> 'Started volinfo_field patchy Status'
ok 8 [ 19/ 1232] < 23> 'gluster --mode=script --wignore volume quota patchy enable'
ok 9 [ 27/ 107] < 24> 'on volinfo_field patchy features.quota'
ok 10 [ 19/ 106] < 25> 'on volinfo_field patchy features.inode-quota'
ok 11 [ 18/ 104] < 26> 'on volinfo_field patchy features.quota-deem-statfs'
ok 12 [ 19/ 150] < 28> 'gluster --mode=script --wignore volume reset patchy'
ok 13 [ 19/ 111] < 29> 'on volinfo_field patchy features.quota'
ok 14 [ 19/ 106] < 30> 'on volinfo_field patchy features.inode-quota'
ok 15 [ 19/ 106] < 31> 'on volinfo_field patchy features.quota-deem-statfs'
ok 16 [ 19/ 144] < 33> 'gluster --mode=script --wignore volume reset patchy force'
ok 17 [ 19/ 105] < 34> 'on volinfo_field patchy features.quota'
ok 18 [ 19/ 107] < 35> 'on volinfo_field patchy features.inode-quota'
ok 19 [ 19/ 106] < 36> 'on volinfo_field patchy features.quota-deem-statfs'
ok 20 [ 18/ 129] < 38> 'gluster --mode=script --wignore volume reset patchy features.quota-deem-statfs'
ok 21 [ 19/ 107] < 39> 'on volinfo_field patchy features.quota-deem-statfs'
ok 22 [ 19/ 148] < 41> 'gluster --mode=script --wignore volume set patchy features.quota-deem-statfs off'
ok 23 [ 19/ 107] < 42> 'off volinfo_field patchy features.quota-deem-statfs'
ok 24 [ 19/ 144] < 44> 'gluster --mode=script --wignore volume set patchy features.quota-deem-statfs on'
ok 25 [ 19/ 105] < 45> 'on volinfo_field patchy features.quota-deem-statfs'
ok 26 [ 19/ 199] < 47> 'gluster --mode=script --wignore volume quota patchy disable'
ok 27 [ 22/ 111] < 48> 'off volinfo_field patchy features.quota'
ok 28 [ 19/ 108] < 49> 'off volinfo_field patchy features.inode-quota'
ok 29 [ 19/ 109] < 50> ' volinfo_field patchy features.quota-deem-statfs'
ok
All tests successful.
Files=1, Tests=29, 9 wallclock secs ( 0.03 usr 0.00 sys + 2.46 cusr 1.42 csys = 3.91 CPU)
Result: PASS
Logs preserved in tarball bug-1250582-volume-reset-should-not-remove-quota-quota-deem-statfs-iteration-1.tar.gz
End of test ./tests/bugs/quota/bug-1250582-volume-reset-should-not-remove-quota-quota-deem-statfs.t
================================================================================
======================================== (578 / 840) ========================================
[17:31:30] Running tests in file ./tests/bugs/quota/bug-1260545.t
./tests/bugs/quota/bug-1260545.t ..
1..18
ok 1 [ 310/ 2199] < 12> 'glusterd'
No volumes present
ok 2 [ 18/ 103] < 13> 'gluster --mode=script --wignore volume info'
ok 3 [ 18/ 136] < 15> 'gluster --mode=script --wignore volume create patchy 172.30.1.95:/d/backends/patchy1 172.30.1.95:/d/backends/patchy2'
ok 4 [ 18/ 215] < 16> 'gluster --mode=script --wignore volume start patchy'
ok 5 [ 19/ 1293] < 18> 'gluster --mode=script --wignore volume quota patchy enable'
ok 6 [ 19/ 43] < 20> 'glusterfs --volfile-id=patchy --volfile-server=172.30.1.95 /mnt/glusterfs/0'
ok 7 [ 25/ 224] < 22> 'gluster --mode=script --wignore volume quota patchy limit-usage / 11MB'
ok 8 [ 19/ 136] < 23> 'gluster --mode=script --wignore volume quota patchy hard-timeout 0'
ok 9 [ 19/ 127] < 24> 'gluster --mode=script --wignore volume quota patchy soft-timeout 0'
ok 10 [ 19/ 671] < 26> './tests/bugs/quota/quota /mnt/glusterfs/0/f1 256 40'
ok 11 [ 20/ 162] < 28> '10.0MB quotausage /'
ok 12 [ 19/ 5143] < 38> 'gluster --mode=script --wignore volume remove-brick patchy 172.30.1.95:/d/backends/patchy2 start'
ok 13 [ 20/ 124] < 39> 'completed remove_brick_status_completed_field patchy 172.30.1.95:/d/backends/patchy2'
ok 14 [ 19/ 1] < 42> '[ -f /d/backends/patchy1/f1 ]'
ok 15 [ 19/ 5] < 43> '[ -f /mnt/glusterfs/0/f1 ]'
ok 16 [ 127/ 1] < 47> '[ 0 = 0 ]'
ok 17 [ 22/ 1] < 48> '[ 0 = 0 ]'
ok 18 [ 19/ 166] < 50> '10.0MB quotausage /'
ok
All tests successful.
Files=1, Tests=18, 12 wallclock secs ( 0.02 usr 0.01 sys + 1.23 cusr 0.93 csys = 2.19 CPU)
Result: PASS
Logs preserved in tarball bug-1260545-iteration-1.tar.gz
End of test ./tests/bugs/quota/bug-1260545.t
================================================================================
======================================== (579 / 840) ========================================
[17:31:42] Running tests in file ./tests/bugs/quota/bug-1287996.t
./tests/bugs/quota/bug-1287996.t ..
1..6
ok 1 [ 235/ 4549] < 12> 'launch_cluster 2'
ok 2 [ 19/ 152] < 14> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/bug-1287996.t_cli1.log volume create patchy 127.1.1.1:/d/backends/1/patchy'
ok 3 [ 19/ 197] < 15> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/bug-1287996.t_cli1.log volume start patchy'
ok 4 [ 20/ 1195] < 16> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/bug-1287996.t_cli1.log volume quota patchy enable'
ok 5 [ 26/ 188] < 18> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/bug-1287996.t_cli1.log peer probe 127.1.1.2'
ok 6 [ 19/ 1203] < 19> '1 check_peers'
ok
All tests successful.
Files=1, Tests=6, 8 wallclock secs ( 0.02 usr 0.00 sys + 0.85 cusr 0.63 csys = 1.50 CPU)
Result: PASS
Logs preserved in tarball bug-1287996-iteration-1.tar.gz
End of test ./tests/bugs/quota/bug-1287996.t
================================================================================
======================================== (580 / 840) ========================================
[17:31:50] Running tests in file ./tests/bugs/quota/bug-1292020.t
./tests/bugs/quota/bug-1292020.t ..
1..10
ok 1 [ 246/ 2275] < 13> 'glusterd'
ok 2 [ 18/ 14] < 14> 'pidof glusterd'
ok 3 [ 18/ 131] < 16> 'gluster --mode=script --wignore volume create patchy 172.30.1.95:/d/backends/patchy'
ok 4 [ 21/ 174] < 17> 'gluster --mode=script --wignore volume start patchy'
ok 5 [ 20/ 1189] < 18> 'gluster --mode=script --wignore volume quota patchy enable'
ok 6 [ 21/ 161] < 19> 'gluster --mode=script --wignore volume quota patchy limit-usage / 1'
ok 7 [ 19/ 31] < 21> 'glusterfs --volfile-server=172.30.1.95 --volfile-id=patchy /mnt/glusterfs/0'
ok 8 [ 19/ 4632] < 24> 'passed write_sample_data'
ok 9 [ 19/ 1130] < 26> 'gluster --mode=script --wignore volume stop patchy'
ok 10 [ 19/ 849] < 27> 'gluster --mode=script --wignore volume delete patchy'
ok
All tests successful.
Files=1, Tests=10, 12 wallclock secs ( 0.02 usr 0.00 sys + 0.69 cusr 0.60 csys = 1.31 CPU)
Result: PASS
Logs preserved in tarball bug-1292020-iteration-1.tar.gz
End of test ./tests/bugs/quota/bug-1292020.t
================================================================================
======================================== (581 / 840) ========================================
[17:32:02] Running tests in file ./tests/bugs/quota/bug-1293601.t
Logs preserved in tarball bug-1293601-iteration-1.tar.gz
./tests/bugs/quota/bug-1293601.t timed out after 200 seconds
./tests/bugs/quota/bug-1293601.t: bad status 124
*********************************
* REGRESSION FAILED *
* Retrying failed tests in case *
* we got some spurious failures *
*********************************
FATAL: command execution failed
java.io.EOFException
at java.base/java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2933)
at java.base/java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:3428)
at java.base/java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:985)
at java.base/java.io.ObjectInputStream.<init>(ObjectInputStream.java:416)
at hudson.remoting.ObjectInputStreamEx.<init>(ObjectInputStreamEx.java:50)
at hudson.remoting.Command.readFrom(Command.java:141)
at hudson.remoting.Command.readFrom(Command.java:127)
at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:35)
at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:62)
Caused: java.io.IOException: Unexpected termination of the channel
at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:80)
Caused: java.io.IOException: Backing channel 'builder-c8-1.int.aws.gluster.org' is disconnected.
at hudson.remoting.RemoteInvocationHandler.channelOrFail(RemoteInvocationHandler.java:227)
at hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:306)
at jdk.proxy2/jdk.proxy2.$Proxy199.isAlive(Unknown Source)
at hudson.Launcher$RemoteLauncher$ProcImpl.isAlive(Launcher.java:1212)
at hudson.Launcher$RemoteLauncher$ProcImpl.join(Launcher.java:1204)
at hudson.tasks.CommandInterpreter.join(CommandInterpreter.java:195)
at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:145)
at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:92)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:818)
at hudson.model.Build$BuildExecution.build(Build.java:199)
at hudson.model.Build$BuildExecution.doRun(Build.java:164)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:527)
at hudson.model.Run.execute(Run.java:1831)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:44)
at hudson.model.ResourceController.execute(ResourceController.java:101)
at hudson.model.Executor.run(Executor.java:445)
FATAL: Unable to delete script file /tmp/jenkins16318644585776504388.sh
java.io.EOFException
at java.base/java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2933)
at java.base/java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:3428)
at java.base/java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:985)
at java.base/java.io.ObjectInputStream.<init>(ObjectInputStream.java:416)
at hudson.remoting.ObjectInputStreamEx.<init>(ObjectInputStreamEx.java:50)
at hudson.remoting.Command.readFrom(Command.java:141)
at hudson.remoting.Command.readFrom(Command.java:127)
at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:35)
at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:62)
Caused: java.io.IOException: Unexpected termination of the channel
at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:80)
Caused: hudson.remoting.ChannelClosedException: Channel "hudson.remoting.Channel at 30ccd7d0:builder-c8-1.int.aws.gluster.org": Remote call on builder-c8-1.int.aws.gluster.org failed. The channel is closing down or has closed down
at hudson.remoting.Channel.call(Channel.java:1105)
at hudson.FilePath.act(FilePath.java:1228)
at hudson.FilePath.act(FilePath.java:1217)
at hudson.FilePath.delete(FilePath.java:1764)
at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:163)
at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:92)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:818)
at hudson.model.Build$BuildExecution.build(Build.java:199)
at hudson.model.Build$BuildExecution.doRun(Build.java:164)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:527)
at hudson.model.Run.execute(Run.java:1831)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:44)
at hudson.model.ResourceController.execute(ResourceController.java:101)
at hudson.model.Executor.run(Executor.java:445)
Build step 'Execute shell' marked build as failure
ERROR: Unable to tear down: Cannot invoke "hudson.FilePath.getName()" because "ws" is null
java.lang.NullPointerException: Cannot invoke "hudson.FilePath.getName()" because "ws" is null
at hudson.slaves.WorkspaceList.tempDir(WorkspaceList.java:313)
at PluginClassLoader for credentials-binding//org.jenkinsci.plugins.credentialsbinding.impl.UnbindableDir.secretsDir(UnbindableDir.java:61)
at PluginClassLoader for credentials-binding//org.jenkinsci.plugins.credentialsbinding.impl.UnbindableDir$UnbinderImpl.unbind(UnbindableDir.java:83)
at PluginClassLoader for credentials-binding//org.jenkinsci.plugins.credentialsbinding.impl.SecretBuildWrapper$1.tearDown(SecretBuildWrapper.java:116)
at hudson.model.AbstractBuild$AbstractBuildExecution.tearDownBuildEnvironments(AbstractBuild.java:567)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:531)
at hudson.model.Run.execute(Run.java:1831)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:44)
at hudson.model.ResourceController.execute(ResourceController.java:101)
at hudson.model.Executor.run(Executor.java:445)
ERROR: builder-c8-1.int.aws.gluster.org is offline; cannot locate java-1.6.0-openjdk-1.6.0.0.x86_64