[Gluster-Maintainers] Build failed in Jenkins: centos8-regression #10
jenkins at build.gluster.org
Wed May 13 18:44:14 UTC 2020
See <https://build.gluster.org/job/centos8-regression/10/display/redirect?page=changes>
Changes:
[Sunny Kumar] build: geo-rep sub-pkg requires policycoreutils-python-utils on rhel8
------------------------------------------
[...truncated 2.95 MB...]
ok 21 [ 5127/ 1] < 45> '[[ 0 -eq 0 ]]'
ok
All tests successful.
Files=1, Tests=21, 23 wallclock secs ( 0.02 usr 0.01 sys + 1.50 cusr 0.87 csys = 2.40 CPU)
Result: PASS
Logs preserved in tarball bug-979365-iteration-1.tar
End of test ./tests/bugs/replicate/bug-979365.t
================================================================================
================================================================================
[18:38:27] Running tests in file ./tests/bugs/replicate/bug-986905.t
./tests/bugs/replicate/bug-986905.t ..
1..12
ok 1 [ 205/ 2453] < 14> 'glusterd'
ok 2 [ 14/ 8] < 15> 'pidof glusterd'
ok 3 [ 15/ 141] < 16> 'gluster --mode=script --wignore volume create patchy replica 2 builder212.int.aws.gluster.org:/d/backends/patchy0 builder212.int.aws.gluster.org:/d/backends/patchy1'
ok 4 [ 14/ 1431] < 17> 'gluster --mode=script --wignore volume start patchy'
ok 5 [ 18/ 56] < 18> 'glusterfs --volfile-id=/patchy --volfile-server=builder212.int.aws.gluster.org /mnt/glusterfs/0 --attribute-timeout=0 --entry-timeout=0'
ok 6 [ 19/ 2156] < 19> 'kill_brick patchy builder212.int.aws.gluster.org /d/backends/patchy0'
ok 7 [ 15/ 11] < 20> 'touch /mnt/glusterfs/0/a'
ok 8 [ 15/ 9] < 21> 'ln /mnt/glusterfs/0/a /mnt/glusterfs/0/link_a'
ok 9 [ 14/ 177] < 22> 'gluster --mode=script --wignore volume start patchy force'
ok 10 [ 16/ 1847] < 23> '1 afr_child_up_status patchy 0'
ok 11 [ 15/ 12] < 24> 'ls -l /mnt/glusterfs/0'
ok 12 [ 19/ 6] < 26> '12748571 get_inum /d/backends/patchy0/link_a'
ok
All tests successful.
Files=1, Tests=12, 9 wallclock secs ( 0.03 usr 0.00 sys + 0.77 cusr 0.74 csys = 1.54 CPU)
Result: PASS
Logs preserved in tarball bug-986905-iteration-1.tar
End of test ./tests/bugs/replicate/bug-986905.t
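
The bug-986905.t run above is a hard-link self-heal check: a link is created while one replica brick is down, the brick is revived, and the healed brick must end up with both names on the same inode. Reconstructed from the log lines as a sketch (it assumes a glusterfs source tree, since TEST, kill_brick, afr_child_up_status and get_inum come from the harness under tests/):

    #!/bin/bash
    . ./tests/include.rc
    . ./tests/volume.rc
    cleanup

    TEST glusterd
    TEST $CLI volume create $V0 replica 2 $H0:$B0/${V0}0 $H0:$B0/${V0}1
    TEST $CLI volume start $V0
    TEST glusterfs --volfile-id=/$V0 --volfile-server=$H0 $M0

    TEST kill_brick $V0 $H0 $B0/${V0}0    # brick0 goes down
    TEST touch $M0/a
    TEST ln $M0/a $M0/link_a              # hard link created while brick0 is down

    TEST $CLI volume start $V0 force      # revive brick0
    EXPECT_WITHIN $PROCESS_UP_TIMEOUT "1" afr_child_up_status $V0 0
    TEST ls -l $M0                        # lookup triggers entry self-heal

    # both names on the healed brick must resolve to one inode
    EXPECT "$(get_inum $B0/${V0}0/a)" get_inum $B0/${V0}0/link_a
    cleanup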
================================================================================
================================================================================
[18:38:36] Running tests in file ./tests/bugs/replicate/mdata-heal-no-xattrs.t
./tests/bugs/replicate/mdata-heal-no-xattrs.t ..
1..32
ok 1 [ 199/ 2429] < 7> 'glusterd'
ok 2 [ 14/ 8] < 8> 'pidof glusterd'
ok 3 [ 15/ 141] < 9> 'gluster --mode=script --wignore volume create patchy replica 3 builder212.int.aws.gluster.org:/d/backends/patchy0 builder212.int.aws.gluster.org:/d/backends/patchy1 builder212.int.aws.gluster.org:/d/backends/patchy2'
ok 4 [ 15/ 160] < 10> 'gluster --mode=script --wignore volume set patchy cluster.self-heal-daemon off'
ok 5 [ 15/ 1447] < 11> 'gluster --mode=script --wignore volume start patchy'
ok 6 [ 22/ 37] < 13> 'glusterfs --volfile-id=/patchy --volfile-server=builder212.int.aws.gluster.org /mnt/glusterfs/0 --attribute-timeout=0 --entry-timeout=0'
ok 7 [ 29/ 64] < 14> '1 afr_child_up_status patchy 0'
ok 8 [ 15/ 65] < 15> '1 afr_child_up_status patchy 1'
ok 9 [ 15/ 63] < 16> '1 afr_child_up_status patchy 2'
ok 10 [ 37/ 1] < 19> '[ 0 -eq 0 ]'
ok 11 [ 14/ 2] < 23> 'chmod +x /d/backends/patchy0/FILE'
ok 12 [ 24/ 2] < 29> 'ln /d/backends/patchy0/.glusterfs/indices/xattrop/xattrop-1b487cb0-114c-4baa-979c-28ca7955cadd /d/backends/patchy0/.glusterfs/indices/xattrop/5f8b2cbe-b08d-404a-bf9b-3a6d8ed394a8'
ok 13 [ 15/ 191] < 30> '^1$ get_pending_heal_count patchy'
ok 14 [ 16/ 208] < 32> 'gluster --mode=script --wignore volume set patchy cluster.self-heal-daemon on'
ok 15 [ 17/ 70] < 33> '1 afr_child_up_status_in_shd patchy 0'
ok 16 [ 15/ 64] < 34> '1 afr_child_up_status_in_shd patchy 1'
ok 17 [ 15/ 63] < 35> '1 afr_child_up_status_in_shd patchy 2'
ok 18 [ 15/ 113] < 36> 'gluster --mode=script --wignore volume heal patchy'
ok 19 [ 15/ 171] < 37> '^0$ get_pending_heal_count patchy'
ok 20 [ 15/ 7] < 41> '000000000000000000000000 get_hex_xattr trusted.afr.patchy-client-1 /d/backends/patchy0/FILE'
ok 21 [ 15/ 6] < 42> '000000000000000000000000 get_hex_xattr trusted.afr.patchy-client-2 /d/backends/patchy0/FILE'
ok 22 [ 15/ 2] < 43> '! getfattr -n trusted.afr.patchy-client-0 /d/backends/patchy0/FILE'
ok 23 [ 15/ 2] < 46> '! getfattr -n trusted.afr.patchy-client-0 /d/backends/patchy1/FILE'
ok 24 [ 15/ 2] < 47> '! getfattr -n trusted.afr.patchy-client-1 /d/backends/patchy1/FILE'
ok 25 [ 15/ 2] < 48> '! getfattr -n trusted.afr.patchy-client-2 /d/backends/patchy1/FILE'
ok 26 [ 15/ 2] < 49> '! getfattr -n trusted.afr.patchy-client-0 /d/backends/patchy2/FILE'
ok 27 [ 15/ 2] < 50> '! getfattr -n trusted.afr.patchy-client-1 /d/backends/patchy2/FILE'
ok 28 [ 15/ 2] < 51> '! getfattr -n trusted.afr.patchy-client-2 /d/backends/patchy2/FILE'
ok 29 [ 15/ 3] < 54> '755 stat -c %a /d/backends/patchy0/FILE'
ok 30 [ 15/ 4] < 55> '755 stat -c %a /d/backends/patchy1/FILE'
ok 31 [ 15/ 4] < 56> '755 stat -c %a /d/backends/patchy2/FILE'
ok 32 [ 15/ 13] < 58> 'Y force_umount /mnt/glusterfs/0'
ok
All tests successful.
Files=1, Tests=32, 6 wallclock secs ( 0.02 usr 0.01 sys + 1.19 cusr 1.01 csys = 2.23 CPU)
Result: PASS
Logs preserved in tarball mdata-heal-no-xattrs-iteration-1.tar
End of test ./tests/bugs/replicate/mdata-heal-no-xattrs.t
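
mdata-heal-no-xattrs.t above exercises metadata heal for a file that carries no AFR pending xattrs: with the self-heal daemon off, FILE's mode is flipped directly on one backend brick, an index entry is planted by hard-linking the brick's xattrop base file to the file's GFID, and the daemon is then expected to reconcile the mode everywhere without leaving stale trusted.afr.* xattrs behind. The core of it, condensed from the log (both GFIDs are specific to this run; the xattrop base name changes every time):

    # shd is off; change metadata on brick0 behind glusterfs's back
    chmod +x /d/backends/patchy0/FILE

    # plant an index entry so the next shd crawl picks FILE up
    ln /d/backends/patchy0/.glusterfs/indices/xattrop/xattrop-1b487cb0-114c-4baa-979c-28ca7955cadd \
       /d/backends/patchy0/.glusterfs/indices/xattrop/5f8b2cbe-b08d-404a-bf9b-3a6d8ed394a8

    gluster --mode=script volume set patchy cluster.self-heal-daemon on
    gluster --mode=script volume heal patchy

    # after the heal: zero pending entries, the mode propagated to every
    # brick, and no leftover trusted.afr.patchy-client-* xattrs on FILE
    gluster volume heal patchy info
    stat -c %a /d/backends/patchy0/FILE /d/backends/patchy1/FILE /d/backends/patchy2/FILE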
================================================================================
================================================================================
[18:38:43] Running tests in file ./tests/bugs/replicate/ta-inode-refresh-read.t
./tests/bugs/replicate/ta-inode-refresh-read.t ..
1..19
ok 1 [ 208/ 5] < 9> 'ta_create_brick_and_volfile brick0'
ok 2 [ 14/ 5] < 10> 'ta_create_brick_and_volfile brick1'
ok 3 [ 15/ 5] < 11> 'ta_create_ta_and_volfile ta'
ok 4 [ 14/ 47] < 12> 'ta_start_brick_process brick0'
ok 5 [ 14/ 71] < 13> 'ta_start_brick_process brick1'
ok 6 [ 15/ 56] < 14> 'ta_start_ta_process ta'
ok 7 [ 15/ 3] < 16> 'ta_create_mount_volfile brick0 brick1 ta'
ok 8 [ 17/ 1] < 19> '[ 0 -eq 0 ]'
ok 9 [ 18/ 1] < 21> '[ 0 -eq 0 ]'
ok 10 [ 15/ 37] < 23> 'ta_start_mount_process /mnt/glusterfs/0'
ok 11 [ 16/ 12] < 24> '1 ta_up_status patchy /mnt/glusterfs/0 0'
ok 12 [ 14/ 7] < 25> 'trusted.afr.patchy-ta-2 ls /d/backends/ta'
ok 13 [ 15/ 10] < 27> 'touch /mnt/glusterfs/0/FILE'
ok 14 [ 14/ 3] < 28> 'ls /d/backends/brick0/FILE'
ok 15 [ 14/ 3] < 29> 'ls /d/backends/brick1/FILE'
ok 16 [ 15/ 3] < 30> '! ls /d/backends/ta/FILE'
ok 17 [ 14/ 5] < 31> 'setfattr -n user.name -v ravi /mnt/glusterfs/0/FILE'
ok 18 [ 20/ 2] < 37> 'rm -f /d/backends/brick0/.glusterfs/5f/3f/5f3f70dc-aa73-407d-a643-e2f5e9a3d12b'
getfattr: Removing leading '/' from absolute path names
ok 19 [ 14/ 14] < 38> 'getfattr -n user.name /mnt/glusterfs/0/FILE'
ok
All tests successful.
Files=1, Tests=19, 1 wallclock secs ( 0.02 usr 0.01 sys + 0.32 cusr 0.50 csys = 0.85 CPU)
Result: PASS
Logs preserved in tarball ta-inode-refresh-read-iteration-1.tar
End of test ./tests/bugs/replicate/ta-inode-refresh-read.t
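
The ta_* steps in ta-inode-refresh-read.t come from the harness's thin-arbiter helpers (tests/thin-arbiter.rc), which assemble a replica-2 volume plus a thin-arbiter brick from hand-written volfiles. Outside the harness, the equivalent volume would be created with the thin-arbiter CLI syntax, roughly as below (hostnames and brick paths here are placeholders):

    # the thin-arbiter brick stores only replica metadata (the
    # trusted.afr.patchy-ta-2 xattr seen in the log), never file data,
    # which is why '! ls /d/backends/ta/FILE' passes above
    gluster volume create patchy replica 2 thin-arbiter 1 \
        server1:/bricks/brick0 server2:/bricks/brick1 ta-server:/bricks/ta
    gluster volume start patchy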
================================================================================
================================================================================
[18:38:44] Running tests in file ./tests/bugs/rpc/bug-1043886.t
./tests/bugs/rpc/bug-1043886.t ..
1..21
ok 1 [ 198/ 2460] < 10> 'glusterd'
ok 2 [ 14/ 8] < 11> 'pidof glusterd'
ok 3 [ 14/ 140] < 12> 'gluster --mode=script --wignore volume create patchy replica 2 builder212.int.aws.gluster.org:/d/backends/patchy1 builder212.int.aws.gluster.org:/d/backends/patchy2'
ok 4 [ 15/ 148] < 13> 'gluster --mode=script --wignore volume set patchy nfs.disable false'
ok 5 [ 16/ 2385] < 14> 'gluster --mode=script --wignore volume start patchy'
ok 6 [ 46/ 36] < 17> 'glusterfs --entry-timeout=0 --attribute-timeout=0 -s builder212.int.aws.gluster.org --volfile-id patchy /mnt/glusterfs/0'
ok 7 [ 28/ 15] < 19> '1 is_nfs_export_available'
ok 8 [ 14/ 30] < 22> 'mount_nfs builder212.int.aws.gluster.org:/patchy /mnt/nfs/0 nolock'
ok 9 [ 25/ 160] < 31> 'gluster --mode=script --wignore volume set patchy server.root-squash on'
ok 10 [ 15/ 159] < 32> 'gluster --mode=script --wignore volume set patchy server.anonuid 22162'
ok 11 [ 16/ 156] < 33> 'gluster --mode=script --wignore volume set patchy server.anongid 5845'
ok 12 [ 16/ 13] < 35> '1 is_nfs_export_available'
ok 13 [ 23/ 1] < 41> '[ 1 -ne 0 ]'
ok 14 [ 22/ 1] < 43> '[ 1 -ne 0 ]'
ok 15 [ 15/ 11] < 47> 'touch /mnt/glusterfs/0/other/file'
ok 16 [ 19/ 1] < 48> '[ 22162:5845 = 22162:5845 ]'
ok 17 [ 15/ 6] < 49> 'mkdir /mnt/glusterfs/0/other/dir'
ok 18 [ 18/ 1] < 50> '[ 22162:5845 = 22162:5845 ]'
ok 19 [ 15/ 29] < 53> 'Y umount_nfs /mnt/nfs/0'
ok 20 [ 14/ 4139] < 55> 'gluster --mode=script --wignore volume stop patchy'
ok 21 [ 14/ 4876] < 56> 'gluster --mode=script --wignore volume delete patchy'
ok
All tests successful.
Files=1, Tests=21, 15 wallclock secs ( 0.02 usr 0.01 sys + 0.98 cusr 0.68 csys = 1.69 CPU)
Result: PASS
Logs preserved in tarball bug-1043886-iteration-1.tar
End of test ./tests/bugs/rpc/bug-1043886.t
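
bug-1043886.t checks server-side root squashing: once server.root-squash is on, operations performed as root on a client are remapped to the configured anonymous uid/gid, which is what assertions 16 and 18 verify (22162:5845 on the freshly created file and directory). The knobs exercised by the log, in isolation:

    gluster volume set patchy server.root-squash on
    gluster volume set patchy server.anonuid 22162
    gluster volume set patchy server.anongid 5845

    # entries created by root through the mount now land as 22162:5845
    touch /mnt/glusterfs/0/other/file
    mkdir /mnt/glusterfs/0/other/dir
    stat -c '%u:%g' /mnt/glusterfs/0/other/file   # expect 22162:5845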
================================================================================
================================================================================
[18:39:00] Running tests in file ./tests/bugs/rpc/bug-847624.t
./tests/bugs/rpc/bug-847624.t ..
1..12
ok 1 [ 208/ 2480] < 12> 'glusterd'
ok 2 [ 16/ 12] < 13> 'pidof glusterd'
ok 3 [ 15/ 139] < 15> 'gluster --mode=script --wignore volume create patchy builder212.int.aws.gluster.org:/d/backends/patchy'
ok 4 [ 14/ 134] < 16> 'gluster --mode=script --wignore volume set patchy nfs.disable off'
ok 5 [ 15/ 137] < 17> 'gluster --mode=script --wignore volume set patchy nfs.drc on'
ok 6 [ 15/ 1318] < 18> 'gluster --mode=script --wignore volume start patchy'
ok 7 [ 24/ 291] < 19> '1 is_nfs_export_available'
ok 8 [ 16/ 35] < 20> 'mount_nfs builder212.int.aws.gluster.org:/patchy /mnt/nfs/0 nolock'
ok 9 [ 35/ 12511] < 23> 'dbench -t 10 10'
ok 10 [ 30/ 299] < 24> 'rm -rf /mnt/nfs/0/clients'
ok 11 [ 16/ 29] < 26> 'Y force_umount /mnt/nfs/0'
ok 12 [ 15/ 170] < 28> 'gluster --mode=script --wignore volume set patchy nfs.drc-size 10000'
ok
All tests successful.
Files=1, Tests=12, 18 wallclock secs ( 0.02 usr 0.00 sys + 1.10 cusr 2.24 csys = 3.36 CPU)
Result: PASS
Logs preserved in tarball bug-847624-iteration-1.tar
End of test ./tests/bugs/rpc/bug-847624.t
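
bug-847624.t enables Gluster-NFS's duplicate request cache (nfs.drc) and stresses the export with dbench, then resizes the cache on the live volume via nfs.drc-size. Condensed from the log (mount_nfs is a harness helper; a plain NFSv3 mount stands in for it here):

    gluster volume set patchy nfs.disable off
    gluster volume set patchy nfs.drc on
    gluster volume start patchy
    mount -t nfs -o vers=3,nolock server:/patchy /mnt/nfs/0
    dbench -t 10 10        # 10 clients for 10 seconds against the DRC-backed export
    umount /mnt/nfs/0
    gluster volume set patchy nfs.drc-size 10000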
================================================================================
================================================================================
[18:39:18] Running tests in file ./tests/bugs/rpc/bug-884452.t
FATAL: command execution failed
java.io.EOFException
at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2735)
at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:3210)
at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:895)
at java.io.ObjectInputStream.<init>(ObjectInputStream.java:357)
at hudson.remoting.ObjectInputStreamEx.<init>(ObjectInputStreamEx.java:49)
at hudson.remoting.Command.readFrom(Command.java:142)
at hudson.remoting.Command.readFrom(Command.java:128)
at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:35)
at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:63)
Caused: java.io.IOException: Unexpected termination of the channel
at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:77)
Caused: java.io.IOException: Backing channel 'builder212.int.aws.gluster.org' is disconnected.
at hudson.remoting.RemoteInvocationHandler.channelOrFail(RemoteInvocationHandler.java:216)
at hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:285)
at com.sun.proxy.$Proxy83.isAlive(Unknown Source)
at hudson.Launcher$RemoteLauncher$ProcImpl.isAlive(Launcher.java:1147)
at hudson.Launcher$RemoteLauncher$ProcImpl.join(Launcher.java:1139)
at hudson.tasks.CommandInterpreter.join(CommandInterpreter.java:155)
at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:109)
at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:66)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:741)
at hudson.model.Build$BuildExecution.build(Build.java:206)
at hudson.model.Build$BuildExecution.doRun(Build.java:163)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:504)
at hudson.model.Run.execute(Run.java:1856)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:428)
FATAL: Unable to delete script file /tmp/jenkins3393386157178472497.sh
java.io.EOFException
at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2735)
at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:3210)
at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:895)
at java.io.ObjectInputStream.<init>(ObjectInputStream.java:357)
at hudson.remoting.ObjectInputStreamEx.<init>(ObjectInputStreamEx.java:49)
at hudson.remoting.Command.readFrom(Command.java:142)
at hudson.remoting.Command.readFrom(Command.java:128)
at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:35)
at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:63)
Caused: java.io.IOException: Unexpected termination of the channel
at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:77)
Caused: hudson.remoting.ChannelClosedException: Channel "hudson.remoting.Channel at 22277858:builder212.int.aws.gluster.org": Remote call on builder212.int.aws.gluster.org failed. The channel is closing down or has closed down
at hudson.remoting.Channel.call(Channel.java:991)
at hudson.FilePath.act(FilePath.java:1069)
at hudson.FilePath.act(FilePath.java:1058)
at hudson.FilePath.delete(FilePath.java:1539)
at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:123)
at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:66)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:741)
at hudson.model.Build$BuildExecution.build(Build.java:206)
at hudson.model.Build$BuildExecution.doRun(Build.java:163)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:504)
at hudson.model.Run.execute(Run.java:1856)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:428)
Build step 'Execute shell' marked build as failure
FATAL: null
java.lang.NullPointerException
at org.jenkinsci.plugins.credentialsbinding.impl.UnbindableDir.tempDir(UnbindableDir.java:67)
at org.jenkinsci.plugins.credentialsbinding.impl.UnbindableDir.secretsDir(UnbindableDir.java:62)
at org.jenkinsci.plugins.credentialsbinding.impl.UnbindableDir.access$000(UnbindableDir.java:23)
at org.jenkinsci.plugins.credentialsbinding.impl.UnbindableDir$UnbinderImpl.unbind(UnbindableDir.java:84)
at org.jenkinsci.plugins.credentialsbinding.impl.SecretBuildWrapper$1.tearDown(SecretBuildWrapper.java:108)
at hudson.model.Build$BuildExecution.doRun(Build.java:174)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:504)
at hudson.model.Run.execute(Run.java:1856)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:428)
ERROR: builder212.int.aws.gluster.org is offline; cannot locate java-1.6.0-openjdk-1.6.0.0.x86_64