[Gluster-Maintainers] Build failed in Jenkins: regression-test-with-multiplex #1923
jenkins at build.gluster.org
Mon Jun 22 22:49:13 UTC 2020
See <https://build.gluster.org/job/regression-test-with-multiplex/1923/display/redirect>
Changes:
------------------------------------------
[...truncated 2.42 MB...]
ok 7 [ 11/ 127] < 27> 'gluster --mode=script --wignore volume set patchy cluster.quorum-type fixed'
ok 8 [ 11/ 4642] < 28> 'gluster --mode=script --wignore volume start patchy'
ok 9 [ 19/ 584] < 30> 'gluster --mode=script --wignore volume set patchy cluster.quorum-count 1'
ok 10 [ 201/ 640] < 31> 'Y check_quorum_nfs'
ok 11 [ 11/ 468] < 32> 'gluster --mode=script --wignore volume set patchy cluster.quorum-count 2'
ok 12 [ 12/ 121] < 33> 'Y check_quorum_nfs'
ok 13 [ 170/ 1809] < 34> 'gluster --mode=script --wignore volume set patchy cluster.quorum-count 3'
ok 14 [ 88/ 600] < 35> 'Y check_quorum_nfs'
ok
All tests successful.
Files=1, Tests=14, 14 wallclock secs ( 0.02 usr 0.00 sys + 0.84 cusr 0.65 csys = 1.51 CPU)
Result: PASS
Logs preserved in tarball quorum-value-check-iteration-1.tar
End of test ./tests/bugs/glusterd/quorum-value-check.t
================================================================================
================================================================================
[22:38:52] Running tests in file ./tests/bugs/glusterd/rebalance-in-cluster.t
./tests/bugs/glusterd/rebalance-in-cluster.t ..
1..15
ok 1 [ 208/ 2810] < 12> 'launch_cluster 2'
ok 2 [ 13/ 136] < 13> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/rebalance-in-cluster.t_cli1.log peer probe 127.1.1.2'
ok 3 [ 17/ 135] < 15> '1 peer_count'
volume create: patchy: success: please start the volume to access data
ok 4 [ 170/ 88] < 18> 'Created cluster_volinfo_field 1 patchy Status'
volume start: patchy: success
ok 5 [ 570/ 73] < 21> 'Started cluster_volinfo_field 1 patchy Status'
ok 6 [ 11/ 57] < 26> 'glusterfs -s 127.1.1.1 --volfile-id=patchy /mnt/glusterfs/0'
ok 7 [ 12/ 24] < 28> 'mkdir /mnt/glusterfs/0/dir1 /mnt/glusterfs/0/dir2 /mnt/glusterfs/0/dir3 /mnt/glusterfs/0/dir4'
ok 8 [ 13/ 38] < 29> 'touch /mnt/glusterfs/0/dir1/files1 /mnt/glusterfs/0/dir1/files2 /mnt/glusterfs/0/dir1/files3 /mnt/glusterfs/0/dir1/files4 /mnt/glusterfs/0/dir2/files1 /mnt/glusterfs/0/dir2/files2 /mnt/glusterfs/0/dir2/files3 /mnt/glusterfs/0/dir2/files4 /mnt/glusterfs/0/dir3/files1 /mnt/glusterfs/0/dir3/files2 /mnt/glusterfs/0/dir3/files3 /mnt/glusterfs/0/dir3/files4 /mnt/glusterfs/0/dir4/files1 /mnt/glusterfs/0/dir4/files2 /mnt/glusterfs/0/dir4/files3 /mnt/glusterfs/0/dir4/files4'
ok 9 [ 12/ 262] < 31> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/rebalance-in-cluster.t_cli1.log volume add-brick patchy 127.1.1.1:/d/backends/1/patchy1 127.1.1.2:/d/backends/2/patchy1'
ok 10 [ 15/ 10162] < 33> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/rebalance-in-cluster.t_cli1.log volume rebalance patchy start'
ok 11 [ 109/ 1165] < 34> 'completed cluster_rebalance_status_field 1 patchy'
ok 12 [ 12/ 3] < 37> 'kill_glusterd 2'
ok 13 [ 12/ 76] < 38> 'completed rebalance_status_field_1 patchy'
ok 14 [ 12/ 1588] < 40> 'start_glusterd 2'
volume rebalance: patchy: success: Rebalance on patchy has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: 992834fe-8906-4cec-93e8-48dea99128e9
ok 15 [ 1016/ 4092] < 49> 'Started cluster_volinfo_field 1 patchy Status'
ok
All tests successful.
Files=1, Tests=15, 23 wallclock secs ( 0.02 usr 0.01 sys + 0.96 cusr 0.63 csys = 1.62 CPU)
Result: PASS
Logs preserved in tarball rebalance-in-cluster-iteration-1.tar
End of test ./tests/bugs/glusterd/rebalance-in-cluster.t
================================================================================
================================================================================
[22:39:15] Running tests in file ./tests/bugs/glusterd/rebalance-operations-in-single-node.t
Logs preserved in tarball rebalance-operations-in-single-node-iteration-1.tar
./tests/bugs/glusterd/rebalance-operations-in-single-node.t timed out after 200 seconds
./tests/bugs/glusterd/rebalance-operations-in-single-node.t: bad status 124
*********************************
* REGRESSION FAILED *
* Retrying failed tests in case *
* we got some spurious failures *
*********************************
Logs preserved in tarball rebalance-operations-in-single-node-iteration-2.tar
./tests/bugs/glusterd/rebalance-operations-in-single-node.t timed out after 200 seconds
End of test ./tests/bugs/glusterd/rebalance-operations-in-single-node.t
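The "bad status 124" above is the exit code GNU coreutils timeout returns when it kills a command that exceeds its time limit, which matches the reported 200-second cap on this test. A minimal local reproduction sketch, assuming the common Gluster practice of running a single .t file under prove (the exact wrapper flags used by the regression harness may differ):

    # Re-run just the timed-out test under the same 200s cap;
    # an exit status of 124 confirms the timeout path was taken.
    timeout 200 prove -vf ./tests/bugs/glusterd/rebalance-operations-in-single-node.t
    echo $?    # prints 124 if the test ran over 200 seconds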
================================================================================
================================================================================
[22:45:57] Running tests in file ./tests/bugs/glusterd/remove-brick-in-cluster.t
./tests/bugs/glusterd/remove-brick-in-cluster.t ..
1..21
ok 1 [ 2362/ 3055] < 8> 'launch_cluster 2'
ok 2 [ 14/ 134] < 11> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-in-cluster.t_cli1.log volume create patchy replica 2 127.1.1.1:/d/backends/1/patchy1 127.1.1.1:/d/backends/1/patchy2 127.1.1.1:/d/backends/1/patchy3 127.1.1.1:/d/backends/1/patchy4'
ok 3 [ 16/ 2402] < 12> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-in-cluster.t_cli1.log volume start patchy'
ok 4 [ 11/ 200] < 14> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-in-cluster.t_cli1.log peer probe 127.1.1.2'
ok 5 [ 14/ 1367] < 15> '1 peer_count'
ok 6 [ 12/ 5216] < 17> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/2/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-in-cluster.t_cli2.log volume remove-brick patchy 127.1.1.1:/d/backends/1/patchy3 127.1.1.1:/d/backends/1/patchy4 start'
ok 7 [ 51/ 92] < 18> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/2/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-in-cluster.t_cli2.log volume info'
ok 8 [ 12/ 156] < 21> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-in-cluster.t_cli1.log volume create patchy1 127.1.1.1:/d/backends/1/patchy10 127.1.1.2:/d/backends/2/patchy11'
ok 9 [ 12/ 384] < 22> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-in-cluster.t_cli1.log volume start patchy1'
ok 10 [ 13/ 5160] < 23> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-in-cluster.t_cli1.log volume remove-brick patchy1 127.1.1.2:/d/backends/2/patchy11 start'
ok 11 [ 15/ 104] < 24> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-in-cluster.t_cli1.log volume status'
ok 12 [ 13/ 3158] < 26> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-in-cluster.t_cli1.log volume stop patchy'
ok 13 [ 13/ 67] < 27> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-in-cluster.t_cli1.log volume delete patchy'
ok 14 [ 12/ 215] < 31> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-in-cluster.t_cli1.log volume create patchy replica 3 127.1.1.1:/d/backends/1/brick1 127.1.1.2:/d/backends/2/brick2 127.1.1.1:/d/backends/1/brick3 127.1.1.2:/d/backends/2/brick4 127.1.1.1:/d/backends/1/brick5 127.1.1.2:/d/backends/2/brick6'
ok 15 [ 12/ 4441] < 36> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-in-cluster.t_cli1.log volume start patchy'
ok 16 [ 97/ 232] < 39> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-in-cluster.t_cli1.log volume remove-brick patchy replica 2 127.1.1.1:/d/backends/1/brick1 127.1.1.2:/d/backends/2/brick6 force'
ok 17 [ 71/ 217] < 42> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-in-cluster.t_cli1.log volume remove-brick patchy replica 2 127.1.1.1:/d/backends/1/brick3 127.1.1.2:/d/backends/2/brick2 force'
ok 18 [ 25/ 3196] < 45> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-in-cluster.t_cli1.log volume remove-brick patchy replica 1 127.1.1.1:/d/backends/1/brick5 force'
ok 19 [ 13/ 4299] < 51> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-in-cluster.t_cli1.log volume add-brick patchy replica 2 127.1.1.1:/d/backends/1/brick5 force'
ok 20 [ 12/ 271] < 54> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-in-cluster.t_cli1.log volume add-brick patchy replica 2 127.1.1.1:/d/backends/1/brick3 127.1.1.2:/d/backends/2/brick2 force'
ok 21 [ 15/ 6473] < 57> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-in-cluster.t_cli1.log volume add-brick patchy replica 3 127.1.1.1:/d/backends/1/brick1 127.1.1.2:/d/backends/2/brick6 force'
ok
All tests successful.
Files=1, Tests=21, 44 wallclock secs ( 0.03 usr 0.00 sys + 1.53 cusr 1.70 csys = 3.26 CPU)
Result: PASS
Logs preserved in tarball remove-brick-in-cluster-iteration-1.tar
End of test ./tests/bugs/glusterd/remove-brick-in-cluster.t
================================================================================
================================================================================
[22:46:41] Running tests in file ./tests/bugs/glusterd/remove-brick-testcases.t
./tests/bugs/glusterd/remove-brick-testcases.t ..
1..35
ok 1 [ 205/ 1419] < 20> 'glusterd'
ok 2 [ 11/ 17] < 21> 'pidof glusterd'
ok 3 [ 13/ 267] < 23> 'gluster --mode=script --wignore volume create patchy builder208.int.aws.gluster.org:/d/backends/patchy1 builder208.int.aws.gluster.org:/d/backends/patchy2 builder208.int.aws.gluster.org:/d/backends/patchy3 builder208.int.aws.gluster.org:/d/backends/patchy4 builder208.int.aws.gluster.org:/d/backends/patchy5'
ok 4 [ 74/ 2041] < 24> 'gluster --mode=script --wignore volume start patchy'
OK
ok 5 [ 820/ 186] < 29> '0 brick_up_status patchy builder208.int.aws.gluster.org /d/backends/patchy1'
ok 6 [ 96/ 64] < 32> '! gluster --mode=script --wignore volume remove-brick patchy builder208.int.aws.gluster.org:/d/backends/patchy1 start'
ok 7 [ 12/ 266] < 34> 'gluster --mode=script --wignore volume start patchy force'
ok 8 [ 186/ 276] < 35> '1 brick_up_status patchy builder208.int.aws.gluster.org /d/backends/patchy1'
ok 9 [ 155/ 5850] < 38> 'gluster --mode=script --wignore volume remove-brick patchy builder208.int.aws.gluster.org:/d/backends/patchy1 start'
ok 10 [ 12/ 1458] < 40> 'completed remove_brick_status_completed_field patchy builder208.int.aws.gluster.org:/d/backends/patchy1'
OK
ok 11 [ 434/ 435] < 46> 'gluster --mode=script --wignore volume remove-brick patchy builder208.int.aws.gluster.org:/d/backends/patchy1 commit'
ok 12 [ 182/ 5691] < 50> 'gluster --mode=script --wignore volume remove-brick patchy builder208.int.aws.gluster.org:/d/backends/patchy2 start'
ok 13 [ 208/ 135] < 54> '! gluster --mode=script --wignore volume remove-brick patchy builder208.int.aws.gluster.org:/d/backends/ABCD status'
ok 14 [ 76/ 134] < 55> '! gluster --mode=script --wignore volume remove-brick patchy builder208.int.aws.gluster.org:/d/backends/ABCD stop'
ok 15 [ 84/ 122] < 60> 'gluster --mode=script --wignore volume remove-brick patchy builder208.int.aws.gluster.org:/d/backends/patchy2 status'
ok 16 [ 75/ 235] < 61> 'gluster --mode=script --wignore volume remove-brick patchy builder208.int.aws.gluster.org:/d/backends/patchy2 stop'
ok 17 [ 12/ 216] < 64> 'gluster --mode=script --wignore volume remove-brick patchy builder208.int.aws.gluster.org:/d/backends/patchy2 force'
ok 18 [ 12/ 86] < 65> '3 brick_count patchy'
ok 19 [ 12/ 163] < 67> 'gluster --mode=script --wignore volume remove-brick patchy builder208.int.aws.gluster.org:/d/backends/patchy3 force'
ok 20 [ 99/ 92] < 68> '2 brick_count patchy'
ok 21 [ 14/ 88] < 76> 'decommissioned remove_brick_commit_status'
ok 22 [ 40/ 1128] < 78> 'gluster --mode=script --wignore volume stop patchy'
ok 23 [ 14/ 4050] < 79> 'gluster --mode=script --wignore volume delete patchy'
ok 24 [ 26/ 166] < 82> 'gluster --mode=script --wignore volume create patchy replica 3 builder208.int.aws.gluster.org:/d/backends/patchy1 builder208.int.aws.gluster.org:/d/backends/patchy2 builder208.int.aws.gluster.org:/d/backends/patchy3 builder208.int.aws.gluster.org:/d/backends/patchy4 builder208.int.aws.gluster.org:/d/backends/patchy5 builder208.int.aws.gluster.org:/d/backends/patchy6'
ok 25 [ 73/ 2762] < 83> 'gluster --mode=script --wignore volume start patchy'
ok 26 [ 191/ 564] < 90> 'failed remove_brick_start_status'
ok 27 [ 288/ 556] < 98> 'decommissioned remove_brick_commit_status2'
ok 28 [ 181/ 269] < 99> 'gluster --mode=script --wignore volume info patchy'
ok 29 [ 32/ 735] < 108> 'success remove_brick_status'
ok 30 [ 164/ 190] < 109> 'gluster --mode=script --wignore volume info patchy'
ok 31 [ 126/ 7846] < 113> 'gluster --mode=script --wignore volume remove-brick patchy builder208.int.aws.gluster.org:/d/backends/patchy4 builder208.int.aws.gluster.org:/d/backends/patchy5 start'
volume remove-brick status: failed: Glusterd Syncop Mgmt brick op 'Rebalance' failed. Please check brick log file for details.
ok 32 [ 152/ 2458] < 114> 'completed remove_brick_status_completed_field patchy builder208.int.aws.gluster.org:/d/backends/patchy5'
ok 33 [ 128/ 260] < 115> 'completed remove_brick_status_completed_field patchy builder208.int.aws.gluster.org:/d/backends/patchy4'
ok 34 [ 88/ 387] < 116> 'gluster --mode=script --wignore volume remove-brick patchy builder208.int.aws.gluster.org:/d/backends/patchy4 builder208.int.aws.gluster.org:/d/backends/patchy5 commit'
ok 35 [ 19/ 1297] < 117> 'gluster --mode=script --wignore volume remove-brick patchy replica 1 builder208.int.aws.gluster.org:/d/backends/patchy2 force'
ok
All tests successful.
Files=1, Tests=35, 48 wallclock secs ( 0.03 usr 0.00 sys + 2.33 cusr 1.56 csys = 3.92 CPU)
Result: PASS
Logs preserved in tarball remove-brick-testcases-iteration-1.tar
End of test ./tests/bugs/glusterd/remove-brick-testcases.t
================================================================================
================================================================================
[22:47:30] Running tests in file ./tests/bugs/glusterd/removing-multiple-bricks-in-single-remove-brick-command.t
./tests/bugs/glusterd/removing-multiple-bricks-in-single-remove-brick-command.t ..
1..22
ok 1 [ 1929/ 2184] < 9> 'glusterd'
ok 2 [ 13/ 32] < 10> 'pidof glusterd'
No volumes present
ok 3 [ 12/ 185] < 11> 'gluster --mode=script --wignore volume info'
ok 4 [ 27/ 156] < 14> 'gluster --mode=script --wignore volume create patchy replica 2 builder208.int.aws.gluster.org:/d/backends/patchy1 builder208.int.aws.gluster.org:/d/backends/patchy2 builder208.int.aws.gluster.org:/d/backends/patchy3 builder208.int.aws.gluster.org:/d/backends/patchy4 builder208.int.aws.gluster.org:/d/backends/patchy5 builder208.int.aws.gluster.org:/d/backends/patchy6'
ok 5 [ 16/ 3024] < 15> 'gluster --mode=script --wignore volume start patchy'
ok 6 [ 154/ 335] < 19> 'glusterfs -s builder208.int.aws.gluster.org --volfile-id patchy /mnt/glusterfs/0'
ok 7 [ 93/ 533] < 20> 'touch /mnt/glusterfs/0/file1 /mnt/glusterfs/0/file2 /mnt/glusterfs/0/file3 /mnt/glusterfs/0/file4 /mnt/glusterfs/0/file5 /mnt/glusterfs/0/file6 /mnt/glusterfs/0/file7 /mnt/glusterfs/0/file8 /mnt/glusterfs/0/file9 /mnt/glusterfs/0/file10'
ok 8 [ 94/ 5729] < 29> 'success remove_brick_start_status'
ok 9 [ 235/ 2961] < 32> 'completed remove_brick_status_completed_field patchy builder208.int.aws.gluster.org:/d/backends/patchy6 builder208.int.aws.gluster.org:/d/backends/patchy1 builder208.int.aws.gluster.org:/d/backends/patchy2 builder208.int.aws.gluster.org:/d/backends/patchy5'
ok 10 [ 106/ 916] < 40> 'success remove_brick_commit_status'
ok 11 [ 599/ 36] < 44> 'Replicate echo Replicate'
ok 12 [ 292/ 283] < 45> 'Y force_umount /mnt/glusterfs/0'
ok 13 [ 231/ 642] < 50> 'gluster --mode=script --wignore volume create patchy1 replica 3 builder208.int.aws.gluster.org:/d/backends/patchy10 builder208.int.aws.gluster.org:/d/backends/patchy11 builder208.int.aws.gluster.org:/d/backends/patchy12 builder208.int.aws.gluster.org:/d/backends/patchy13 builder208.int.aws.gluster.org:/d/backends/patchy14 builder208.int.aws.gluster.org:/d/backends/patchy15 builder208.int.aws.gluster.org:/d/backends/patchy16 builder208.int.aws.gluster.org:/d/backends/patchy17 builder208.int.aws.gluster.org:/d/backends/patchy18'
ok 14 [ 110/ 3228] < 51> 'gluster --mode=script --wignore volume start patchy1'
ok 15 [ 235/ 212] < 52> '9 brick_count patchy1'
ok 16 [ 16/ 279] < 55> 'glusterfs -s builder208.int.aws.gluster.org --volfile-id patchy1 /mnt/glusterfs/0'
ok 17 [ 36/ 24] < 56> 'touch /mnt/glusterfs/0/zerobytefile.txt'
ok 18 [ 12/ 117] < 57> 'mkdir /mnt/glusterfs/0/test_dir'
ok 19 [ 94/ 535] < 58> 'dd if=/dev/zero of=/mnt/glusterfs/0/file bs=1024 count=1024'
ok 20 [ 40/ 76] < 71> 'failed remove_brick_start'
ok 21 [ 103/ 2551] < 76> 'success remove_brick'
ok 22 [ 125/ 117] < 78> 'Y force_umount /mnt/glusterfs/0'
ok
All tests successful.
Files=1, Tests=22, 29 wallclock secs ( 0.03 usr 0.00 sys + 1.17 cusr 1.02 csys = 2.22 CPU)
Result: PASS
Logs preserved in tarball removing-multiple-bricks-in-single-remove-brick-command-iteration-1.tar
End of test ./tests/bugs/glusterd/removing-multiple-bricks-in-single-remove-brick-command.t
================================================================================
================================================================================
[22:48:00] Running tests in file ./tests/bugs/glusterd/replace-brick-operations.t
./tests/bugs/glusterd/replace-brick-operations.t ..
1..14
ok 1 [ 216/ 1517] < 11> 'glusterd'
ok 2 [ 11/ 19] < 12> 'pidof glusterd'
ok 3 [ 79/ 240] < 15> 'gluster --mode=script --wignore volume create patchy replica 2 builder208.int.aws.gluster.org:/d/backends/patchy1 builder208.int.aws.gluster.org:/d/backends/patchy2'
ok 4 [ 26/ 4611] < 16> 'gluster --mode=script --wignore volume start patchy'
ok 5 [ 77/ 186] < 24> '! gluster --mode=script --wignore volume replace-brick patchy builder208.int.aws.gluster.org:/d/backends/patchy2 builder208.int.aws.gluster.org:/d/backends/patchy3 start'
ok 6 [ 125/ 327] < 25> '! gluster --mode=script --wignore volume replace-brick patchy builder208.int.aws.gluster.org:/d/backends/patchy2 builder208.int.aws.gluster.org:/d/backends/patchy3 status'
ok 7 [ 433/ 422] < 26> '! gluster --mode=script --wignore volume replace-brick patchy builder208.int.aws.gluster.org:/d/backends/patchy2 builder208.int.aws.gluster.org:/d/backends/patchy3 abort'
ok 8 [ 146/ 2655] < 30> 'gluster --mode=script --wignore volume replace-brick patchy builder208.int.aws.gluster.org:/d/backends/patchy2 builder208.int.aws.gluster.org:/d/backends/patchy3 commit force'
ok 9 [ 78/ 158] < 34> 'glusterfs --volfile-id=patchy --volfile-server=builder208.int.aws.gluster.org /mnt/glusterfs/0'
ok 10 [ 21/ 2254] < 37> 'gluster --mode=script --wignore volume replace-brick patchy builder208.int.aws.gluster.org:/d/backends/patchy1 builder208.int.aws.gluster.org:/d/backends/patchy1_new commit force'
ok 11 [ 13/ 141] < 39> '1 afr_child_up_status patchy 1'
ok 12 [ 12/ 120] < 41> 'kill_brick patchy builder208.int.aws.gluster.org /d/backends/patchy1_new'
ok 13 [ 13/ 2218] < 44> 'gluster --mode=script --wignore volume replace-brick patchy builder208.int.aws.gluster.org:/d/backends/patchy1_new builder208.int.aws.gluster.org:/d/backends/patchy1_newer commit force'
ok 14 [ 68/ 210] < 46> '1 afr_child_up_status patchy 1'
ok
All tests successful.
Files=1, Tests=14, 16 wallclock secs ( 0.03 usr 0.01 sys + 0.71 cusr 0.74 csys = 1.49 CPU)
Result: PASS
Logs preserved in tarball replace-brick-operations-iteration-1.tar
End of test ./tests/bugs/glusterd/replace-brick-operations.t
================================================================================
================================================================================
[22:48:16] Running tests in file ./tests/bugs/glusterd/reset-brick-and-daemons-follow-quorum.t
./tests/bugs/glusterd/reset-brick-and-daemons-follow-quorum.t ..
1..17
ok 1 [ 209/ 4413] < 24> 'launch_cluster 3'
ok 2 [ 12/ 135] < 25> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/reset-brick-and-daemons-follow-quorum.t_cli1.log peer probe 127.1.1.2'
ok 3 [ 17/ 106] < 26> '1 check_peers'
ok 4 [ 12/ 155] < 28> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/reset-brick-and-daemons-follow-quorum.t_cli1.log volume create patchy replica 2 127.1.1.1:/d/backends/patchy 127.1.1.2:/d/backends/patchy'
ok 5 [ 13/ 2611] < 29> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/reset-brick-and-daemons-follow-quorum.t_cli1.log volume start patchy'
ok 6 [ 24/ 72] < 33> '! gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/reset-brick-and-daemons-follow-quorum.t_cli1.log volume reset-brick patchy 127.1.1.1:/d/backends/patchy 127.1.1.1:/d/backends/patchy commit force'
ok 7 [ 13/ 1074] < 35> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/reset-brick-and-daemons-follow-quorum.t_cli1.log volume reset-brick patchy 127.1.1.1:/d/backends/patchy start'
ok 8 [ 13/ 4292] < 37> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/reset-brick-and-daemons-follow-quorum.t_cli1.log volume reset-brick patchy 127.1.1.1:/d/backends/patchy 127.1.1.1:/d/backends/patchy commit force'
ok 9 [ 34/ 144] < 41> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/reset-brick-and-daemons-follow-quorum.t_cli1.log peer probe 127.1.1.3'
ok 10 [ 17/ 1372] < 42> '2 peer_count'
ok 11 [ 12/ 82] < 44> '1 cluster_brick_up_status 1 patchy 127.1.1.1 /d/backends/patchy'
ok 12 [ 12/ 83] < 45> '1 cluster_brick_up_status 1 patchy 127.1.1.2 /d/backends/patchy'
ok 13 [ 12/ 87] < 46> 'Y shd_up_status_1'
ok 14 [ 12/ 88] < 47> 'Y shd_up_status_2'
ok 15 [ 122/ 3] < 53> 'kill_glusterd 1'
ok 16 [ 13/ 1721] < 56> 'glusterd --xlator-option management.working-directory=/d/backends/1/glusterd --xlator-option management.transport.socket.bind-address=127.1.1.1 --xlator-option management.run-directory=/d/backends/1/run/gluster --xlator-option management.glusterd-sockfile=/d/backends/1/glusterd/gd.sock --xlator-option management.cluster-test-mode=/var/log/glusterfs/1 --log-file=/var/log/glusterfs/1/reset-brick-and-daemons-follow-quorum.t_glusterd1.log --pid-file=/d/backends/1/glusterd.pid'
ok 17 [ 19/ 1496] < 61> 'Y shd_up_status_2'
ok
All tests successful.
Files=1, Tests=17, 19 wallclock secs ( 0.03 usr 0.01 sys + 1.41 cusr 0.90 csys = 2.35 CPU)
Result: PASS
Logs preserved in tarball reset-brick-and-daemons-follow-quorum-iteration-1.tar
End of test ./tests/bugs/glusterd/reset-brick-and-daemons-follow-quorum.t
================================================================================
================================================================================
[22:48:35] Running tests in file ./tests/bugs/glusterd/serialize-shd-manager-glusterd-restart.t
Process leaked file descriptors. See https://jenkins.io/redirect/troubleshooting/process-leaked-file-descriptors for more information
Build step 'Execute shell' marked build as failure
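The leaked-descriptor failure is Jenkins' own check rather than a test assertion: per the linked troubleshooting page, the shell step is failed when a background process it spawned still holds the step's stdout/stderr open after the step exits. A hedged sketch of the usual remedy for a long-running helper launched from a test script (detach it from the build's stdio; some_daemon is a hypothetical placeholder, and glusterd normally daemonizes itself, so this is illustrative only):

    # Detach the helper from the Jenkins step's inherited descriptors
    # so the step can close its streams and finish cleanly.
    setsid some_daemon </dev/null >/dev/null 2>&1 &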