[Gluster-Maintainers] Build failed in Jenkins: regression-test-burn-in #4771
jenkins at build.gluster.org
Sat Oct 26 16:03:55 UTC 2019
See <https://build.gluster.org/job/regression-test-burn-in/4771/display/redirect>
Changes:
------------------------------------------
[...truncated 1.63 MB...]
ok 19 [ 11/ 67] < 43> '! gluster --mode=script --wignore volume info patchy'
ok
All tests successful.
Files=1, Tests=19, 20 wallclock secs ( 0.02 usr 0.00 sys + 0.74 cusr 0.70 csys = 1.46 CPU)
Result: PASS
Logs preserved in tarball meta-iteration-1.tar
End of test ./tests/basic/meta.t
================================================================================
================================================================================
[15:40:19] Running tests in file ./tests/basic/mgmt_v3-locks.t
./tests/basic/mgmt_v3-locks.t ..
1..14
ok 1 [ 256/ 4147] < 80> 'launch_cluster 3'
ok 2 [ 11/ 136] < 81> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/mgmt_v3-locks.t_cli1.log peer probe 127.1.1.2'
ok 3 [ 21/ 150] < 82> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/mgmt_v3-locks.t_cli1.log peer probe 127.1.1.3'
ok 4 [ 26/ 97] < 84> '2 check_peers'
volume create: patchy: success: please start the volume to access data
volume create: patchy1: success: please start the volume to access data
ok 5 [ 434/ 70] < 87> 'Created volinfo_field patchy Status'
ok 6 [ 11/ 67] < 88> 'Created volinfo_field patchy1 Status'
volume start: patchy1: success
volume start: patchy: success
ok 7 [ 861/ 71] < 91> 'Started volinfo_field patchy Status'
ok 8 [ 11/ 72] < 92> 'Started volinfo_field patchy1 Status'
volume remove-brick start: failed: Another transaction is in progress for patchy. Please try again after some time.
volume remove-brick start: success
ID: ea9f7ac1-64d9-4555-8936-bdc9d7030c07
ok 9 [ 5353/ 74] < 97> '2 check_peers'
volume remove-brick start: failed: Another transaction is in progress for patchy1. Please try again after some time.
volume remove-brick start: success
ID: a42598ad-e683-422d-b45c-77905ff4e38a
ok 10 [ 5193/ 80] < 102> '2 check_peers'
volume set: success
volume set: success
Status of volume: patchy
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 127.1.1.1:/d/backends/1/patchy        49161     0          Y       5850
Brick 127.1.1.2:/d/backends/2/patchy        49163     0          Y       5893
Task Status of Volume patchy
------------------------------------------------------------------------------
Task : Remove brick
ID : ea9f7ac1-64d9-4555-8936-bdc9d7030c07
Removed bricks:
127.1.1.2:/d/backends/2/patchy
Status : completed
Status of volume: patchy1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 127.1.1.1:/d/backends/1/patchy1       49163     0          Y       5894
Brick 127.1.1.2:/d/backends/2/patchy1       49162     0          Y       5872
Task Status of Volume patchy1
------------------------------------------------------------------------------
Task : Remove brick
ID : a42598ad-e683-422d-b45c-77905ff4e38a
Removed bricks:
127.1.1.2:/d/backends/2/patchy1
Status : completed
Number of Peers: 2
Hostname: 127.1.1.2
Uuid: b3bdf6af-6786-46ab-bf79-9fc74d155689
State: Peer in Cluster (Connected)
Hostname: 127.1.1.3
Uuid: 341ae11f-51bc-4f0a-80a6-dcf3f6e7aee5
State: Peer in Cluster (Disconnected)
ok 11 [ 474/ 69] < 112> '1 check_peers'
ok 12 [ 11/ 63] < 113> 'Started volinfo_field patchy Status'
ok 13 [ 10/ 55] < 114> 'Started volinfo_field patchy1 Status'
ok 14 [ 11/ 1165] < 116> 'glusterd --xlator-option management.working-directory=/d/backends/3/glusterd --xlator-option management.transport.socket.bind-address=127.1.1.3 --xlator-option management.run-directory=/d/backends/3/run/gluster --xlator-option management.glusterd-sockfile=/d/backends/3/glusterd/gd.sock --xlator-option management.cluster-test-mode=/var/log/glusterfs/3 --log-file=/var/log/glusterfs/3/mgmt_v3-locks.t_glusterd3.log --pid-file=/d/backends/3/glusterd.pid'
Status of volume: patchy
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 127.1.1.1:/d/backends/1/patchy        49161     0          Y       5850
Brick 127.1.1.2:/d/backends/2/patchy        49163     0          Y       5893
Task Status of Volume patchy
------------------------------------------------------------------------------
Task : Remove brick
ID : ea9f7ac1-64d9-4555-8936-bdc9d7030c07
Removed bricks:
127.1.1.2:/d/backends/2/patchy
Status : completed
Status of volume: patchy1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 127.1.1.1:/d/backends/1/patchy1       49163     0          Y       5894
Brick 127.1.1.2:/d/backends/2/patchy1       49162     0          Y       5872
Task Status of Volume patchy1
------------------------------------------------------------------------------
Task : Remove brick
ID : a42598ad-e683-422d-b45c-77905ff4e38a
Removed bricks:
127.1.1.2:/d/backends/2/patchy1
Status : completed
Number of Peers: 2
Hostname: 127.1.1.2
Uuid: b3bdf6af-6786-46ab-bf79-9fc74d155689
State: Peer in Cluster (Connected)
Hostname: 127.1.1.3
Uuid: 341ae11f-51bc-4f0a-80a6-dcf3f6e7aee5
State: Peer in Cluster (Disconnected)
ok
All tests successful.
Files=1, Tests=14, 19 wallclock secs ( 0.03 usr 0.01 sys + 1.57 cusr 0.87 csys = 2.48 CPU)
Result: PASS
Logs preserved in tarball mgmt_v3-locks-iteration-1.tar
End of test ./tests/basic/mgmt_v3-locks.t
================================================================================
================================================================================
[15:40:39] Running tests in file ./tests/basic/mount-nfs-auth.t
Logs preserved in tarball mount-nfs-auth-iteration-1.tar
./tests/basic/mount-nfs-auth.t timed out after 200 seconds
./tests/basic/mount-nfs-auth.t: bad status 124
*********************************
*       REGRESSION FAILED       *
* Retrying failed tests in case *
* we got some spurious failures *
*********************************
Logs preserved in tarball mount-nfs-auth-iteration-2.tar
./tests/basic/mount-nfs-auth.t timed out after 200 seconds
End of test ./tests/basic/mount-nfs-auth.t
================================================================================
================================================================================
[15:47:19] Running tests in file ./tests/basic/mount.t
Logs preserved in tarball mount-iteration-1.tar
./tests/basic/mount.t timed out after 200 seconds
./tests/basic/mount.t: bad status 124
*********************************
*       REGRESSION FAILED       *
* Retrying failed tests in case *
* we got some spurious failures *
*********************************
Logs preserved in tarball mount-iteration-2.tar
./tests/basic/mount.t timed out after 200 seconds
End of test ./tests/basic/mount.t
================================================================================
================================================================================
[15:54:41] Running tests in file ./tests/basic/mpx-compat.t
./tests/basic/mpx-compat.t ..
1..12
ok 1 [ 37586/ 1112] < 24> 'glusterd'
ok 2 [ 10/ 62] < 25> 'gluster --mode=script --wignore volume set all cluster.brick-multiplex yes'
ok 3 [ 12/ 100] < 28> 'gluster --mode=script --wignore volume create patchy builder205.int.aws.gluster.org:/d/backends/brick-patchy-0 builder205.int.aws.gluster.org:/d/backends/brick-patchy-1'
ok 4 [ 13/ 106] < 29> 'gluster --mode=script --wignore volume create patchy1 builder205.int.aws.gluster.org:/d/backends/brick-patchy1-0 builder205.int.aws.gluster.org:/d/backends/brick-patchy1-1'
volume set: success
ok 5 [ 142/ 1225] < 35> 'gluster --mode=script --wignore volume start patchy'
ok 6 [ 14/ 2100] < 36> 'gluster --mode=script --wignore volume start patchy1'
ok 7 [ 45012/ 33] < 42> '1 count_processes'
ok 8 [ 12/ 60] < 43> '1 count_brick_pids'
ok 9 [ 9/ 83] < 46> 'gluster --mode=script --wignore volume stop patchy1'
ok 10 [ 9/ 82] < 47> 'gluster --mode=script --wignore volume set patchy1 server.manage-gids no'
ok 11 [ 12/ 1083] < 48> 'gluster --mode=script --wignore volume start patchy1'
ok 12 [ 11/ 22] < 51> '2 count_processes'
ok
All tests successful.
Files=1, Tests=12, 90 wallclock secs ( 0.01 usr 0.00 sys + 0.65 cusr 0.55 csys = 1.21 CPU)
Result: PASS
Logs preserved in tarball mpx-compat-iteration-1.tar
End of test ./tests/basic/mpx-compat.t
================================================================================
================================================================================
[15:56:11] Running tests in file ./tests/basic/multiple-volume-shd-mux.t
FATAL: command execution failed
java.io.EOFException
at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2680)
at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:3155)
at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:861)
at java.io.ObjectInputStream.<init>(ObjectInputStream.java:357)
at hudson.remoting.ObjectInputStreamEx.<init>(ObjectInputStreamEx.java:49)
at hudson.remoting.Command.readFrom(Command.java:140)
at hudson.remoting.Command.readFrom(Command.java:126)
at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:36)
at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:63)
Caused: java.io.IOException: Unexpected termination of the channel
at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:77)
Caused: java.io.IOException: Backing channel 'builder205.aws.gluster.org' is disconnected.
at hudson.remoting.RemoteInvocationHandler.channelOrFail(RemoteInvocationHandler.java:214)
at hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:283)
at com.sun.proxy.$Proxy96.isAlive(Unknown Source)
at hudson.Launcher$RemoteLauncher$ProcImpl.isAlive(Launcher.java:1150)
at hudson.Launcher$RemoteLauncher$ProcImpl.join(Launcher.java:1142)
at hudson.tasks.CommandInterpreter.join(CommandInterpreter.java:155)
at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:109)
at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:66)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:741)
at hudson.model.Build$BuildExecution.build(Build.java:206)
at hudson.model.Build$BuildExecution.doRun(Build.java:163)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:504)
at hudson.model.Run.execute(Run.java:1815)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
FATAL: Unable to delete script file /tmp/jenkins2895310929306659156.sh
java.io.EOFException
at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2680)
at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:3155)
at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:861)
at java.io.ObjectInputStream.<init>(ObjectInputStream.java:357)
at hudson.remoting.ObjectInputStreamEx.<init>(ObjectInputStreamEx.java:49)
at hudson.remoting.Command.readFrom(Command.java:140)
at hudson.remoting.Command.readFrom(Command.java:126)
at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:36)
at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:63)
Caused: java.io.IOException: Unexpected termination of the channel
at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:77)
Caused: hudson.remoting.ChannelClosedException: Channel "unknown": Remote call on builder205.aws.gluster.org failed. The channel is closing down or has closed down
at hudson.remoting.Channel.call(Channel.java:950)
at hudson.FilePath.act(FilePath.java:1072)
at hudson.FilePath.act(FilePath.java:1061)
at hudson.FilePath.delete(FilePath.java:1542)
at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:123)
at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:66)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:741)
at hudson.model.Build$BuildExecution.build(Build.java:206)
at hudson.model.Build$BuildExecution.doRun(Build.java:163)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:504)
at hudson.model.Run.execute(Run.java:1815)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Build step 'Execute shell' marked build as failure
ERROR: builder205.aws.gluster.org is offline; cannot locate java-1.6.0-openjdk-1.6.0.0.x86_64