[Gluster-Maintainers] Build failed in Jenkins: regression-test-burn-in #5036
jenkins at build.gluster.org
Wed Apr 22 16:21:09 UTC 2020
See <https://build.gluster.org/job/regression-test-burn-in/5036/display/redirect?page=changes>
Changes:
[Pranith Kumar K] tests: Fix spurious failure of tests/basic/quick-read-with-upcall.t
------------------------------------------
[...truncated 1.63 MB...]
ok 5 [ 324/ 51] < 87> 'Created volinfo_field patchy Status'
ok 6 [ 9/ 53] < 88> 'Created volinfo_field patchy1 Status'
volume start: patchy: success
volume start: patchy1: success
ok 7 [ 584/ 53] < 91> 'Started volinfo_field patchy Status'
ok 8 [ 9/ 54] < 92> 'Started volinfo_field patchy1 Status'
volume remove-brick start: failed: Locking failed on 127.1.1.3. Please check log file for details.
Locking failed on 127.1.1.2. Please check log file for details.
volume remove-brick start: failed: Locking failed on 127.1.1.1. Please check log file for details.
ok 9 [ 88/ 55] < 97> '2 check_peers'
volume remove-brick start: failed: Another transaction is in progress for patchy1. Please try again after some time.
volume remove-brick start: success
ID: 441ab084-65ec-428f-a7ab-7229f55c3125
ok 10 [ 5200/ 82] < 102> '2 check_peers'
volume set: success
volume set: success
Status of volume: patchy
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 127.1.1.1:/d/backends/1/patchy        49161     0          Y       5915
Brick 127.1.1.2:/d/backends/2/patchy        49165     0          Y       5960
Task Status of Volume patchy
------------------------------------------------------------------------------
There are no active volume tasks
Status of volume: patchy1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 127.1.1.1:/d/backends/1/patchy1       49163     0          Y       5923
Brick 127.1.1.2:/d/backends/2/patchy1       49160     0          Y       5900
Task Status of Volume patchy1
------------------------------------------------------------------------------
Task : Remove brick
ID : 441ab084-65ec-428f-a7ab-7229f55c3125
Removed bricks:
127.1.1.2:/d/backends/2/patchy1
Status : completed
Number of Peers: 2
Hostname: 127.1.1.2
Uuid: 03effa89-19a5-4898-96f0-a527f81e3ee2
State: Peer in Cluster (Connected)
Hostname: 127.1.1.3
Uuid: 9807fc10-6bbe-484c-aaa5-6dfbcf1cace4
State: Peer in Cluster (Disconnected)
ok 11 [ 582/ 53] < 112> '1 check_peers'
ok 12 [ 9/ 51] < 113> 'Started volinfo_field patchy Status'
ok 13 [ 9/ 52] < 114> 'Started volinfo_field patchy1 Status'
ok 14 [ 10/ 1158] < 116> 'glusterd --xlator-option management.working-directory=/d/backends/3/glusterd --xlator-option management.transport.socket.bind-address=127.1.1.3 --xlator-option management.run-directory=/d/backends/3/run/gluster --xlator-option management.glusterd-sockfile=/d/backends/3/glusterd/gd.sock --xlator-option management.cluster-test-mode=/var/log/glusterfs/3 --log-file=/var/log/glusterfs/3/mgmt_v3-locks.t_glusterd3.log --pid-file=/d/backends/3/glusterd.pid'
Status of volume: patchy
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 127.1.1.1:/d/backends/1/patchy        49161     0          Y       5915
Brick 127.1.1.2:/d/backends/2/patchy        49165     0          Y       5960
Task Status of Volume patchy
------------------------------------------------------------------------------
There are no active volume tasks
Status of volume: patchy1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 127.1.1.1:/d/backends/1/patchy1       49163     0          Y       5923
Brick 127.1.1.2:/d/backends/2/patchy1       49160     0          Y       5900
Task Status of Volume patchy1
------------------------------------------------------------------------------
Task : Remove brick
ID : 441ab084-65ec-428f-a7ab-7229f55c3125
Removed bricks:
127.1.1.2:/d/backends/2/patchy1
Status : completed
Number of Peers: 2
Hostname: 127.1.1.2
Uuid: 03effa89-19a5-4898-96f0-a527f81e3ee2
State: Peer in Cluster (Connected)
Hostname: 127.1.1.3
Uuid: 9807fc10-6bbe-484c-aaa5-6dfbcf1cace4
State: Peer in Cluster (Disconnected)
ok
All tests successful.
Files=1, Tests=14, 13 wallclock secs ( 0.03 usr 0.00 sys + 1.40 cusr 0.88 csys = 2.31 CPU)
Result: PASS
Logs preserved in tarball mgmt_v3-locks-iteration-1.tar
End of test ./tests/basic/mgmt_v3-locks.t
================================================================================
================================================================================
[15:47:33] Running tests in file ./tests/basic/mount-nfs-auth.t
Logs preserved in tarball mount-nfs-auth-iteration-1.tar
./tests/basic/mount-nfs-auth.t timed out after 200 seconds
./tests/basic/mount-nfs-auth.t: bad status 124
*********************************
* REGRESSION FAILED *
* Retrying failed tests in case *
* we got some spurious failures *
*********************************
Logs preserved in tarball mount-nfs-auth-iteration-2.tar
./tests/basic/mount-nfs-auth.t timed out after 200 seconds
End of test ./tests/basic/mount-nfs-auth.t
================================================================================
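Note on the "bad status 124" lines above: GNU coreutils timeout exits with status 124 when the command it supervises is killed for exceeding its time limit, so 124 here means the test never finished within the 200-second cap rather than failing an assertion. A minimal sketch of that pattern, assuming the regression harness wraps each .t file roughly like this (the 200-second limit matches the log; the exact prove invocation is an assumption):

    # Hypothetical wrapper: run one test file under a 200-second limit.
    timeout 200 prove -vf ./tests/basic/mount-nfs-auth.t
    status=$?
    # 124 is timeout's own exit code, i.e. the time limit fired.
    if [ "$status" -eq 124 ]; then
        echo "./tests/basic/mount-nfs-auth.t timed out after 200 seconds"
    fi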
================================================================================
[15:54:53] Running tests in file ./tests/basic/mount.t
Logs preserved in tarball mount-iteration-1.tar
./tests/basic/mount.t timed out after 200 seconds
./tests/basic/mount.t: bad status 124
*********************************
* REGRESSION FAILED *
* Retrying failed tests in case *
* we got some spurious failures *
*********************************
Logs preserved in tarball mount-iteration-2.tar
./tests/basic/mount.t timed out after 200 seconds
End of test ./tests/basic/mount.t
================================================================================
================================================================================
[16:01:33] Running tests in file ./tests/basic/mpx-compat.t
Logs preserved in tarball mpx-compat-iteration-1.tar
./tests/basic/mpx-compat.t timed out after 200 seconds
./tests/basic/mpx-compat.t: bad status 124
*********************************
* REGRESSION FAILED *
* Retrying failed tests in case *
* we got some spurious failures *
*********************************
./tests/basic/mpx-compat.t ..
1..12
ok 1 [ 372/ 1060] < 24> 'glusterd'
ok 2 [ 9/ 61] < 25> 'gluster --mode=script --wignore volume set all cluster.brick-multiplex yes'
ok 3 [ 12/ 77] < 28> 'gluster --mode=script --wignore volume create patchy builder209.int.aws.gluster.org:/d/backends/brick-patchy-0 builder209.int.aws.gluster.org:/d/backends/brick-patchy-1'
ok 4 [ 13/ 100] < 29> 'gluster --mode=script --wignore volume create patchy1 builder209.int.aws.gluster.org:/d/backends/brick-patchy1-0 builder209.int.aws.gluster.org:/d/backends/brick-patchy1-1'
volume set: success
ok 5 [ 121/ 1250] < 35> 'gluster --mode=script --wignore volume start patchy'
ok 6 [ 9/ 2067] < 36> 'gluster --mode=script --wignore volume start patchy1'
ok 7 [ 45013/ 29] < 42> '1 count_processes'
ok 8 [ 10/ 77] < 43> '1 count_brick_pids'
ok 9 [ 23/ 91] < 46> 'gluster --mode=script --wignore volume stop patchy1'
ok 10 [ 9/ 698] < 47> 'gluster --mode=script --wignore volume set patchy1 server.manage-gids no'
ok 11 [ 9/ 1096] < 48> 'gluster --mode=script --wignore volume start patchy1'
ok 12 [ 9/ 16] < 51> '2 count_processes'
ok
All tests successful.
Files=1, Tests=12, 53 wallclock secs ( 0.02 usr 0.01 sys + 0.66 cusr 0.53 csys = 1.22 CPU)
Result: PASS
Logs preserved in tarball mpx-compat-iteration-2.tar
End of test ./tests/basic/mpx-compat.t
================================================================================
================================================================================
[16:05:56] Running tests in file ./tests/basic/multiple-volume-shd-mux.t
Logs preserved in tarball multiple-volume-shd-mux-iteration-1.tar
./tests/basic/multiple-volume-shd-mux.t timed out after 200 seconds
./tests/basic/multiple-volume-shd-mux.t: bad status 124
*********************************
* REGRESSION FAILED *
* Retrying failed tests in case *
* we got some spurious failures *
*********************************
Logs preserved in tarball multiple-volume-shd-mux-iteration-2.tar
./tests/basic/multiple-volume-shd-mux.t timed out after 200 seconds
End of test ./tests/basic/multiple-volume-shd-mux.t
================================================================================
================================================================================
[16:12:36] Running tests in file ./tests/basic/multiplex.t
FATAL: command execution failed
java.io.EOFException
    at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2735)
    at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:3210)
    at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:895)
    at java.io.ObjectInputStream.<init>(ObjectInputStream.java:357)
    at hudson.remoting.ObjectInputStreamEx.<init>(ObjectInputStreamEx.java:49)
    at hudson.remoting.Command.readFrom(Command.java:142)
    at hudson.remoting.Command.readFrom(Command.java:128)
    at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:35)
    at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:63)
Caused: java.io.IOException: Unexpected termination of the channel
    at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:77)
Caused: java.io.IOException: Backing channel 'builder209.aws.gluster.org' is disconnected.
    at hudson.remoting.RemoteInvocationHandler.channelOrFail(RemoteInvocationHandler.java:216)
    at hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:285)
    at com.sun.proxy.$Proxy86.isAlive(Unknown Source)
    at hudson.Launcher$RemoteLauncher$ProcImpl.isAlive(Launcher.java:1147)
    at hudson.Launcher$RemoteLauncher$ProcImpl.join(Launcher.java:1139)
    at hudson.tasks.CommandInterpreter.join(CommandInterpreter.java:155)
    at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:109)
    at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:66)
    at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
    at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:741)
    at hudson.model.Build$BuildExecution.build(Build.java:206)
    at hudson.model.Build$BuildExecution.doRun(Build.java:163)
    at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:504)
    at hudson.model.Run.execute(Run.java:1856)
    at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
    at hudson.model.ResourceController.execute(ResourceController.java:97)
    at hudson.model.Executor.run(Executor.java:428)
FATAL: Unable to delete script file /tmp/jenkins8199672240192153016.sh
java.io.EOFException
    at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2735)
    at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:3210)
    at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:895)
    at java.io.ObjectInputStream.<init>(ObjectInputStream.java:357)
    at hudson.remoting.ObjectInputStreamEx.<init>(ObjectInputStreamEx.java:49)
    at hudson.remoting.Command.readFrom(Command.java:142)
    at hudson.remoting.Command.readFrom(Command.java:128)
    at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:35)
    at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:63)
Caused: java.io.IOException: Unexpected termination of the channel
    at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:77)
Caused: hudson.remoting.ChannelClosedException: Channel "hudson.remoting.Channel@60d26b94:builder209.aws.gluster.org": Remote call on builder209.aws.gluster.org failed. The channel is closing down or has closed down
    at hudson.remoting.Channel.call(Channel.java:991)
    at hudson.FilePath.act(FilePath.java:1069)
    at hudson.FilePath.act(FilePath.java:1058)
    at hudson.FilePath.delete(FilePath.java:1539)
    at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:123)
    at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:66)
    at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
    at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:741)
    at hudson.model.Build$BuildExecution.build(Build.java:206)
    at hudson.model.Build$BuildExecution.doRun(Build.java:163)
    at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:504)
    at hudson.model.Run.execute(Run.java:1856)
    at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
    at hudson.model.ResourceController.execute(ResourceController.java:97)
    at hudson.model.Executor.run(Executor.java:428)
Build step 'Execute shell' marked build as failure
ERROR: builder209.aws.gluster.org is offline; cannot locate java-1.6.0-openjdk-1.6.0.0.x86_64