[Gluster-Maintainers] Build failed in Jenkins: centos8-regression #1787

jenkins at build.gluster.org
Fri May 23 17:42:33 UTC 2025


See <https://build.gluster.org/job/centos8-regression/1787/display/redirect>

Changes:


------------------------------------------
[...truncated 2.90 MiB...]
./tests/bugs/quota/bug-1243798.t .. 
1..15
ok   1 [    245/   2208] <  11> 'glusterd'
ok   2 [     18/    128] <  13> 'gluster --mode=script --wignore volume create patchy 172.30.1.95:/d/backends/patchy'
ok   3 [     18/    132] <  14> 'gluster --mode=script --wignore volume set patchy nfs.disable false'
ok   4 [     18/   1179] <  15> 'gluster --mode=script --wignore volume start patchy'
ok   5 [     19/    296] <  17> '1 is_nfs_export_available'
ok   6 [     18/     39] <  18> 'mount_nfs 172.30.1.95:/patchy /mnt/nfs/0 noac,nolock'
ok   7 [     18/     10] <  20> 'mkdir -p /mnt/nfs/0/dir1/dir2'
ok   8 [     17/      7] <  21> 'touch /mnt/nfs/0/dir1/dir2/file'
ok   9 [     17/   1166] <  23> 'gluster --mode=script --wignore volume quota patchy enable'
ok  10 [     24/    121] <  24> 'gluster --mode=script --wignore volume quota patchy hard-timeout 0'
ok  11 [     18/    121] <  25> 'gluster --mode=script --wignore volume quota patchy soft-timeout 0'
ok  12 [     18/    156] <  26> 'gluster --mode=script --wignore volume quota patchy limit-objects /dir1 10'
ok  13 [     18/      9] <  28> 'stat /mnt/nfs/0/dir1/dir2/file'
getfattr: Removing leading '/' from absolute path names
ok  14 [   2044/    162] <  42> '2 quota_object_list_field /dir1 5'
ok  15 [     19/     31] <  44> 'Y force_umount /mnt/nfs/0'
ok
All tests successful.
Files=1, Tests=15,  8 wallclock secs ( 0.02 usr  0.00 sys +  0.87 cusr  0.75 csys =  1.64 CPU)
Result: PASS
Logs preserved in tarball bug-1243798-iteration-1.tar.gz
End of test ./tests/bugs/quota/bug-1243798.t
================================================================================


======================================== (577 / 840) ========================================
[17:31:54] Running tests in file ./tests/bugs/quota/bug-1250582-volume-reset-should-not-remove-quota-quota-deem-statfs.t
./tests/bugs/quota/bug-1250582-volume-reset-should-not-remove-quota-quota-deem-statfs.t .. 
1..29
ok   1 [    239/   2234] <  13> 'glusterd'
ok   2 [     18/     16] <  14> 'pidof glusterd'
No volumes present
ok   3 [     18/    104] <  15> 'gluster --mode=script --wignore volume info'
ok   4 [     18/    135] <  17> 'gluster --mode=script --wignore volume create patchy replica 2 172.30.1.95:/d/backends/1 172.30.1.95:/d/backends/2'
ok   5 [     19/    108] <  18> 'Created volinfo_field patchy Status'
ok   6 [     18/   1260] <  20> 'gluster --mode=script --wignore volume start patchy'
ok   7 [     23/    119] <  21> 'Started volinfo_field patchy Status'
ok   8 [     18/   1240] <  23> 'gluster --mode=script --wignore volume quota patchy enable'
ok   9 [     20/    109] <  24> 'on volinfo_field patchy features.quota'
ok  10 [     18/    105] <  25> 'on volinfo_field patchy features.inode-quota'
ok  11 [     19/    108] <  26> 'on volinfo_field patchy features.quota-deem-statfs'
ok  12 [     19/    151] <  28> 'gluster --mode=script --wignore volume reset patchy'
ok  13 [     19/    105] <  29> 'on volinfo_field patchy features.quota'
ok  14 [     18/    104] <  30> 'on volinfo_field patchy features.inode-quota'
ok  15 [     19/    104] <  31> 'on volinfo_field patchy features.quota-deem-statfs'
ok  16 [     18/    138] <  33> 'gluster --mode=script --wignore volume reset patchy force'
ok  17 [     18/    108] <  34> 'on volinfo_field patchy features.quota'
ok  18 [     19/    110] <  35> 'on volinfo_field patchy features.inode-quota'
ok  19 [     19/    109] <  36> 'on volinfo_field patchy features.quota-deem-statfs'
ok  20 [     19/    131] <  38> 'gluster --mode=script --wignore volume reset patchy features.quota-deem-statfs'
ok  21 [     18/    105] <  39> 'on volinfo_field patchy features.quota-deem-statfs'
ok  22 [     18/    148] <  41> 'gluster --mode=script --wignore volume set patchy features.quota-deem-statfs off'
ok  23 [     19/    108] <  42> 'off volinfo_field patchy features.quota-deem-statfs'
ok  24 [     19/    150] <  44> 'gluster --mode=script --wignore volume set patchy features.quota-deem-statfs on'
ok  25 [     19/    109] <  45> 'on volinfo_field patchy features.quota-deem-statfs'
ok  26 [     19/    192] <  47> 'gluster --mode=script --wignore volume quota patchy disable'
ok  27 [     20/    106] <  48> 'off volinfo_field patchy features.quota'
ok  28 [     19/    109] <  49> 'off volinfo_field patchy features.inode-quota'
ok  29 [     19/    109] <  50> ' volinfo_field patchy features.quota-deem-statfs'
ok
All tests successful.
Files=1, Tests=29,  8 wallclock secs ( 0.03 usr  0.00 sys +  2.37 cusr  1.50 csys =  3.90 CPU)
Result: PASS
Logs preserved in tarball bug-1250582-volume-reset-should-not-remove-quota-quota-deem-statfs-iteration-1.tar.gz
End of test ./tests/bugs/quota/bug-1250582-volume-reset-should-not-remove-quota-quota-deem-statfs.t
================================================================================


======================================== (578 / 840) ========================================
[17:32:03] Running tests in file ./tests/bugs/quota/bug-1260545.t
./tests/bugs/quota/bug-1260545.t .. 
1..18
ok   1 [    313/   2222] <  12> 'glusterd'
No volumes present
ok   2 [     18/    102] <  13> 'gluster --mode=script --wignore volume info'
ok   3 [     19/    136] <  15> 'gluster --mode=script --wignore volume create patchy 172.30.1.95:/d/backends/patchy1 172.30.1.95:/d/backends/patchy2'
ok   4 [     18/    216] <  16> 'gluster --mode=script --wignore volume start patchy'
ok   5 [     18/   1272] <  18> 'gluster --mode=script --wignore volume quota patchy enable'
ok   6 [     21/     58] <  20> 'glusterfs --volfile-id=patchy --volfile-server=172.30.1.95 /mnt/glusterfs/0'
ok   7 [     20/    192] <  22> 'gluster --mode=script --wignore volume quota patchy limit-usage / 11MB'
ok   8 [     19/    136] <  23> 'gluster --mode=script --wignore volume quota patchy hard-timeout 0'
ok   9 [     19/    127] <  24> 'gluster --mode=script --wignore volume quota patchy soft-timeout 0'
ok  10 [     19/    658] <  26> './tests/bugs/quota/quota /mnt/glusterfs/0/f1 256 40'
ok  11 [     19/    160] <  28> '10.0MB quotausage /'
ok  12 [     18/   5138] <  38> 'gluster --mode=script --wignore volume remove-brick patchy 172.30.1.95:/d/backends/patchy2 start'
ok  13 [     20/    116] <  39> 'completed remove_brick_status_completed_field patchy 172.30.1.95:/d/backends/patchy2'
ok  14 [     18/      1] <  42> '[ -f /d/backends/patchy1/f1 ]'
ok  15 [     18/      4] <  43> '[ -f /mnt/glusterfs/0/f1 ]'
ok  16 [    124/      1] <  47> '[ 0 = 0 ]'
ok  17 [     21/      1] <  48> '[ 0 = 0 ]'
ok  18 [     18/    164] <  50> '10.0MB quotausage /'
ok
All tests successful.
Files=1, Tests=18, 11 wallclock secs ( 0.02 usr  0.01 sys +  1.24 cusr  0.90 csys =  2.17 CPU)
Result: PASS
Logs preserved in tarball bug-1260545-iteration-1.tar.gz
End of test ./tests/bugs/quota/bug-1260545.t
================================================================================


======================================== (579 / 840) ========================================
[17:32:15] Running tests in file ./tests/bugs/quota/bug-1287996.t
./tests/bugs/quota/bug-1287996.t .. 
1..6
ok   1 [    237/   4533] <  12> 'launch_cluster 2'
ok   2 [     18/    145] <  14> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/bug-1287996.t_cli1.log volume create patchy 127.1.1.1:/d/backends/1/patchy'
ok   3 [     18/    187] <  15> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/bug-1287996.t_cli1.log volume start patchy'
ok   4 [     19/   1187] <  16> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/bug-1287996.t_cli1.log volume quota patchy enable'
ok   5 [     22/    187] <  18> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/bug-1287996.t_cli1.log peer probe 127.1.1.2'
ok   6 [     19/   1233] <  19> '1 check_peers'
ok
All tests successful.
Files=1, Tests=6,  8 wallclock secs ( 0.02 usr  0.01 sys +  0.82 cusr  0.63 csys =  1.48 CPU)
Result: PASS
Logs preserved in tarball bug-1287996-iteration-1.tar.gz
End of test ./tests/bugs/quota/bug-1287996.t
================================================================================


======================================== (580 / 840) ========================================
[17:32:23] Running tests in file ./tests/bugs/quota/bug-1292020.t
./tests/bugs/quota/bug-1292020.t .. 
1..10
ok   1 [    239/   2223] <  13> 'glusterd'
ok   2 [     17/     14] <  14> 'pidof glusterd'
ok   3 [     17/    128] <  16> 'gluster --mode=script --wignore volume create patchy 172.30.1.95:/d/backends/patchy'
ok   4 [     18/    165] <  17> 'gluster --mode=script --wignore volume start patchy'
ok   5 [     18/   1177] <  18> 'gluster --mode=script --wignore volume quota patchy enable'
ok   6 [     20/    159] <  19> 'gluster --mode=script --wignore volume quota patchy limit-usage / 1'
ok   7 [     19/     28] <  21> 'glusterfs --volfile-server=172.30.1.95 --volfile-id=patchy /mnt/glusterfs/0'
ok   8 [     19/   4345] <  24> 'passed write_sample_data'
ok   9 [     19/   1129] <  26> 'gluster --mode=script --wignore volume stop patchy'
ok  10 [     19/    861] <  27> 'gluster --mode=script --wignore volume delete patchy'
ok
All tests successful.
Files=1, Tests=10, 11 wallclock secs ( 0.02 usr  0.01 sys +  0.68 cusr  0.58 csys =  1.29 CPU)
Result: PASS
Logs preserved in tarball bug-1292020-iteration-1.tar.gz
End of test ./tests/bugs/quota/bug-1292020.t
================================================================================


======================================== (581 / 840) ========================================
[17:32:34] Running tests in file ./tests/bugs/quota/bug-1293601.t
./tests/bugs/quota/bug-1293601.t .. 
1..10
ok   1 [    246/   2300] <   8> 'glusterd'
ok   2 [     19/    152] <  10> 'gluster --mode=script --wignore volume create patchy replica 2 172.30.1.95:/d/backends/patchy1 172.30.1.95:/d/backends/patchy2 172.30.1.95:/d/backends/patchy3 172.30.1.95:/d/backends/patchy4'
ok   3 [     20/   1383] <  11> 'gluster --mode=script --wignore volume start patchy'
ok   4 [     33/    794] <  12> '4 online_brick_count'
ok   5 [     19/   1383] <  13> 'gluster --mode=script --wignore volume quota patchy enable'
ok   6 [     22/     82] <  15> 'glusterfs --volfile-server=172.30.1.95 --volfile-id=patchy /mnt/glusterfs/0'
ok   7 [  12277/    171] <  26> '1.0MB quotausage /'
ok   8 [     19/    285] <  28> 'gluster --mode=script --wignore volume quota patchy disable'
ok   9 [     35/   1679] <  29> 'gluster --mode=script --wignore volume quota patchy enable'
not ok  10 [    146/  87131] <  31> '1.0MB quotausage /' -> 'Got "515.0KB" instead of "1.0MB"'
Failed 1/10 subtests 

Test Summary Report
-------------------
./tests/bugs/quota/bug-1293601.t (Wstat: 0 Tests: 10 Failed: 1)
  Failed test:  10
Files=1, Tests=10, 111 wallclock secs ( 0.02 usr  0.00 sys +  2.07 cusr  2.89 csys =  4.98 CPU)
Result: FAIL
Logs preserved in tarball bug-1293601-iteration-1.tar.gz
./tests/bugs/quota/bug-1293601.t: bad status 1

       *********************************
       *       REGRESSION FAILED       *
       * Retrying failed tests in case *
       * we got some spurious failures *
       *********************************

FATAL: command execution failed
java.io.EOFException
	at java.base/java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2933)
	at java.base/java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:3428)
	at java.base/java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:985)
	at java.base/java.io.ObjectInputStream.<init>(ObjectInputStream.java:416)
	at hudson.remoting.ObjectInputStreamEx.<init>(ObjectInputStreamEx.java:50)
	at hudson.remoting.Command.readFrom(Command.java:141)
	at hudson.remoting.Command.readFrom(Command.java:127)
	at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:35)
	at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:62)
Caused: java.io.IOException: Unexpected termination of the channel
	at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:80)
Caused: java.io.IOException: Backing channel 'builder-c8-1.int.aws.gluster.org' is disconnected.
	at hudson.remoting.RemoteInvocationHandler.channelOrFail(RemoteInvocationHandler.java:227)
	at hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:306)
	at jdk.proxy2/jdk.proxy2.$Proxy199.isAlive(Unknown Source)
	at hudson.Launcher$RemoteLauncher$ProcImpl.isAlive(Launcher.java:1212)
	at hudson.Launcher$RemoteLauncher$ProcImpl.join(Launcher.java:1204)
	at hudson.tasks.CommandInterpreter.join(CommandInterpreter.java:195)
	at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:145)
	at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:92)
	at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
	at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:818)
	at hudson.model.Build$BuildExecution.build(Build.java:199)
	at hudson.model.Build$BuildExecution.doRun(Build.java:164)
	at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:527)
	at hudson.model.Run.execute(Run.java:1831)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:44)
	at hudson.model.ResourceController.execute(ResourceController.java:101)
	at hudson.model.Executor.run(Executor.java:445)
FATAL: Unable to delete script file /tmp/jenkins3190245313791646289.sh
java.io.EOFException
	at java.base/java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2933)
	at java.base/java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:3428)
	at java.base/java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:985)
	at java.base/java.io.ObjectInputStream.<init>(ObjectInputStream.java:416)
	at hudson.remoting.ObjectInputStreamEx.<init>(ObjectInputStreamEx.java:50)
	at hudson.remoting.Command.readFrom(Command.java:141)
	at hudson.remoting.Command.readFrom(Command.java:127)
	at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:35)
	at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:62)
Caused: java.io.IOException: Unexpected termination of the channel
	at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:80)
Caused: hudson.remoting.ChannelClosedException: Channel "hudson.remoting.Channel at e1e915:builder-c8-1.int.aws.gluster.org": Remote call on builder-c8-1.int.aws.gluster.org failed. The channel is closing down or has closed down
	at hudson.remoting.Channel.call(Channel.java:1105)
	at hudson.FilePath.act(FilePath.java:1228)
	at hudson.FilePath.act(FilePath.java:1217)
	at hudson.FilePath.delete(FilePath.java:1764)
	at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:163)
	at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:92)
	at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
	at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:818)
	at hudson.model.Build$BuildExecution.build(Build.java:199)
	at hudson.model.Build$BuildExecution.doRun(Build.java:164)
	at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:527)
	at hudson.model.Run.execute(Run.java:1831)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:44)
	at hudson.model.ResourceController.execute(ResourceController.java:101)
	at hudson.model.Executor.run(Executor.java:445)
Build step 'Execute shell' marked build as failure
ERROR: Unable to tear down: Cannot invoke "hudson.FilePath.getName()" because "ws" is null
java.lang.NullPointerException: Cannot invoke "hudson.FilePath.getName()" because "ws" is null
	at hudson.slaves.WorkspaceList.tempDir(WorkspaceList.java:313)
	at PluginClassLoader for credentials-binding//org.jenkinsci.plugins.credentialsbinding.impl.UnbindableDir.secretsDir(UnbindableDir.java:61)
	at PluginClassLoader for credentials-binding//org.jenkinsci.plugins.credentialsbinding.impl.UnbindableDir$UnbinderImpl.unbind(UnbindableDir.java:83)
	at PluginClassLoader for credentials-binding//org.jenkinsci.plugins.credentialsbinding.impl.SecretBuildWrapper$1.tearDown(SecretBuildWrapper.java:116)
	at hudson.model.AbstractBuild$AbstractBuildExecution.tearDownBuildEnvironments(AbstractBuild.java:567)
	at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:531)
	at hudson.model.Run.execute(Run.java:1831)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:44)
	at hudson.model.ResourceController.execute(ResourceController.java:101)
	at hudson.model.Executor.run(Executor.java:445)
ERROR: builder-c8-1.int.aws.gluster.org is offline; cannot locate java-1.6.0-openjdk-1.6.0.0.x86_64

