[Gluster-Maintainers] Build failed in Jenkins: regression-test-with-multiplex #1647
jenkins at build.gluster.org
Wed Feb 5 22:07:02 UTC 2020
See <https://build.gluster.org/job/regression-test-with-multiplex/1647/display/redirect>
Changes:
------------------------------------------
[...truncated 2.43 MB...]
Files=1, Tests=15, 24 wallclock secs ( 0.02 usr 0.01 sys + 1.05 cusr 0.70 csys = 1.78 CPU)
Result: PASS
Logs preserved in tarball rebalance-in-cluster-iteration-1.tar
End of test ./tests/bugs/glusterd/rebalance-in-cluster.t
================================================================================
================================================================================
[22:01:26] Running tests in file ./tests/bugs/glusterd/rebalance-operations-in-single-node.t
./tests/bugs/glusterd/rebalance-operations-in-single-node.t ..
1..48
ok 1 [ 281/ 1377] < 16> 'glusterd'
ok 2 [ 11/ 21] < 17> 'pidof glusterd'
No volumes present
ok 3 [ 11/ 197] < 19> 'gluster --mode=script --wignore volume info'
ok 4 [ 24/ 116] < 20> 'gluster --mode=script --wignore volume create StartMigrationDuringRebalanceTest builder210.int.aws.gluster.org:/d/backends/StartMigrationDuringRebalanceTest1 builder210.int.aws.gluster.org:/d/backends/StartMigrationDuringRebalanceTest2 builder210.int.aws.gluster.org:/d/backends/StartMigrationDuringRebalanceTest3 builder210.int.aws.gluster.org:/d/backends/StartMigrationDuringRebalanceTest4'
ok 5 [ 29/ 1517] < 21> 'gluster --mode=script --wignore volume start StartMigrationDuringRebalanceTest'
ok 6 [ 80/ 5884] < 24> 'gluster --mode=script --wignore volume rebalance StartMigrationDuringRebalanceTest start'
ok 7 [ 90/ 105] < 29> '! gluster --mode=script --wignore volume remove-brick StartMigrationDuringRebalanceTest builder210.int.aws.gluster.org:/d/backends/StartMigrationDuringRebalanceTest1 status'
ok 8 [ 17/ 85] < 31> 'completed rebalance_status_field StartMigrationDuringRebalanceTest'
ok 9 [ 44/ 5225] < 33> 'gluster --mode=script --wignore volume remove-brick StartMigrationDuringRebalanceTest builder210.int.aws.gluster.org:/d/backends/StartMigrationDuringRebalanceTest1 start'
ok 10 [ 11/ 144] < 34> '! gluster --mode=script --wignore volume rebalance StartMigrationDuringRebalanceTest start'
ok 11 [ 40/ 71] < 35> '! gluster --mode=script --wignore volume rebalance StartMigrationDuringRebalanceTest status'
ok 12 [ 12/ 69] < 36> '! gluster --mode=script --wignore volume remove-brick StartMigrationDuringRebalanceTest builder210.int.aws.gluster.org:/d/backends/StartMigrationDuringRebalanceTest2 start'
ok 13 [ 11/ 103] < 38> 'completed remove_brick_status_completed_field StartMigrationDuringRebalanceTest builder210.int.aws.gluster.org:/d/backends/StartMigrationDuringRebalanceTest1'
ok 14 [ 11/ 180] < 40> 'gluster --mode=script --wignore volume remove-brick StartMigrationDuringRebalanceTest builder210.int.aws.gluster.org:/d/backends/StartMigrationDuringRebalanceTest1 commit'
ok 15 [ 14/ 5089] < 42> 'gluster --mode=script --wignore volume rebalance StartMigrationDuringRebalanceTest start'
ok 16 [ 32/ 63] < 43> 'completed rebalance_status_field StartMigrationDuringRebalanceTest'
ok 17 [ 9/ 49] < 44> 'gluster --mode=script --wignore volume rebalance StartMigrationDuringRebalanceTest stop'
ok 18 [ 9/ 5080] < 46> 'gluster --mode=script --wignore volume remove-brick StartMigrationDuringRebalanceTest builder210.int.aws.gluster.org:/d/backends/StartMigrationDuringRebalanceTest2 start'
ok 19 [ 37/ 78] < 47> 'gluster --mode=script --wignore volume remove-brick StartMigrationDuringRebalanceTest builder210.int.aws.gluster.org:/d/backends/StartMigrationDuringRebalanceTest2 stop'
ok 20 [ 9/ 81] < 51> 'gluster --mode=script --wignore volume create patchy builder210.int.aws.gluster.org:/d/backends/patchy1 builder210.int.aws.gluster.org:/d/backends/patchy2 builder210.int.aws.gluster.org:/d/backends/patchy3'
ok 21 [ 14/ 1194] < 52> 'gluster --mode=script --wignore volume start patchy'
ok 22 [ 12/ 55] < 55> 'glusterfs -s builder210.int.aws.gluster.org --volfile-id patchy /mnt/glusterfs/0'
ok 23 [ 12/ 62] < 56> 'mkdir /mnt/glusterfs/0/dir1 /mnt/glusterfs/0/dir2 /mnt/glusterfs/0/dir3 /mnt/glusterfs/0/dir4 /mnt/glusterfs/0/dir5 /mnt/glusterfs/0/dir6 /mnt/glusterfs/0/dir7 /mnt/glusterfs/0/dir8 /mnt/glusterfs/0/dir9 /mnt/glusterfs/0/dir10'
ok 24 [ 14/ 157] < 57> 'touch /mnt/glusterfs/0/dir1/file1 /mnt/glusterfs/0/dir1/file2 /mnt/glusterfs/0/dir1/file3 /mnt/glusterfs/0/dir1/file4 /mnt/glusterfs/0/dir1/file5 /mnt/glusterfs/0/dir1/file6 /mnt/glusterfs/0/dir1/file7 /mnt/glusterfs/0/dir1/file8 /mnt/glusterfs/0/dir1/file9 /mnt/glusterfs/0/dir1/file10 /mnt/glusterfs/0/dir2/file1 /mnt/glusterfs/0/dir2/file2 /mnt/glusterfs/0/dir2/file3 /mnt/glusterfs/0/dir2/file4 /mnt/glusterfs/0/dir2/file5 /mnt/glusterfs/0/dir2/file6 /mnt/glusterfs/0/dir2/file7 /mnt/glusterfs/0/dir2/file8 /mnt/glusterfs/0/dir2/file9 /mnt/glusterfs/0/dir2/file10 /mnt/glusterfs/0/dir3/file1 /mnt/glusterfs/0/dir3/file2 /mnt/glusterfs/0/dir3/file3 /mnt/glusterfs/0/dir3/file4 /mnt/glusterfs/0/dir3/file5 /mnt/glusterfs/0/dir3/file6 /mnt/glusterfs/0/dir3/file7 /mnt/glusterfs/0/dir3/file8 /mnt/glusterfs/0/dir3/file9 /mnt/glusterfs/0/dir3/file10 /mnt/glusterfs/0/dir4/file1 /mnt/glusterfs/0/dir4/file2 /mnt/glusterfs/0/dir4/file3 /mnt/glusterfs/0/dir4/file4 /mnt/glusterfs/0/dir4/file5 /mnt/glusterfs/0/dir4/file6 /mnt/glusterfs/0/dir4/file7 /mnt/glusterfs/0/dir4/file8 /mnt/glusterfs/0/dir4/file9 /mnt/glusterfs/0/dir4/file10 /mnt/glusterfs/0/dir5/file1 /mnt/glusterfs/0/dir5/file2 /mnt/glusterfs/0/dir5/file3 /mnt/glusterfs/0/dir5/file4 /mnt/glusterfs/0/dir5/file5 /mnt/glusterfs/0/dir5/file6 /mnt/glusterfs/0/dir5/file7 /mnt/glusterfs/0/dir5/file8 /mnt/glusterfs/0/dir5/file9 /mnt/glusterfs/0/dir5/file10 /mnt/glusterfs/0/dir6/file1 /mnt/glusterfs/0/dir6/file2 /mnt/glusterfs/0/dir6/file3 /mnt/glusterfs/0/dir6/file4 /mnt/glusterfs/0/dir6/file5 /mnt/glusterfs/0/dir6/file6 /mnt/glusterfs/0/dir6/file7 /mnt/glusterfs/0/dir6/file8 /mnt/glusterfs/0/dir6/file9 /mnt/glusterfs/0/dir6/file10 /mnt/glusterfs/0/dir7/file1 /mnt/glusterfs/0/dir7/file2 /mnt/glusterfs/0/dir7/file3 /mnt/glusterfs/0/dir7/file4 /mnt/glusterfs/0/dir7/file5 /mnt/glusterfs/0/dir7/file6 /mnt/glusterfs/0/dir7/file7 /mnt/glusterfs/0/dir7/file8 /mnt/glusterfs/0/dir7/file9 /mnt/glusterfs/0/dir7/file10 
/mnt/glusterfs/0/dir8/file1 /mnt/glusterfs/0/dir8/file2 /mnt/glusterfs/0/dir8/file3 /mnt/glusterfs/0/dir8/file4 /mnt/glusterfs/0/dir8/file5 /mnt/glusterfs/0/dir8/file6 /mnt/glusterfs/0/dir8/file7 /mnt/glusterfs/0/dir8/file8 /mnt/glusterfs/0/dir8/file9 /mnt/glusterfs/0/dir8/file10 /mnt/glusterfs/0/dir9/file1 /mnt/glusterfs/0/dir9/file2 /mnt/glusterfs/0/dir9/file3 /mnt/glusterfs/0/dir9/file4 /mnt/glusterfs/0/dir9/file5 /mnt/glusterfs/0/dir9/file6 /mnt/glusterfs/0/dir9/file7 /mnt/glusterfs/0/dir9/file8 /mnt/glusterfs/0/dir9/file9 /mnt/glusterfs/0/dir9/file10 /mnt/glusterfs/0/dir10/file1 /mnt/glusterfs/0/dir10/file2 /mnt/glusterfs/0/dir10/file3 /mnt/glusterfs/0/dir10/file4 /mnt/glusterfs/0/dir10/file5 /mnt/glusterfs/0/dir10/file6 /mnt/glusterfs/0/dir10/file7 /mnt/glusterfs/0/dir10/file8 /mnt/glusterfs/0/dir10/file9 /mnt/glusterfs/0/dir10/file10'
ok 25 [ 11/ 274] < 60> 'gluster --mode=script --wignore volume add-brick patchy builder210.int.aws.gluster.org:/d/backends/patchy4'
ok 26 [ 27/ 5113] < 61> 'gluster --mode=script --wignore volume rebalance patchy start'
ok 27 [ 83/ 2092] < 62> 'completed rebalance_status_field patchy'
ok 28 [ 535/ 2245] < 74> 'glusterd'
ok 29 [ 424/ 1] < 83> '[ 18 == 18 ]'
ok 30 [ 12/ 1] < 84> '[ 0Bytes == 0Bytes ]'
ok 31 [ 12/ 1] < 85> '[ 100 == 100 ]'
ok 32 [ 13/ 1] < 86> '[ 0 == 0 ]'
ok 33 [ 12/ 1] < 87> '[ 0 == 0 ]'
ok 34 [ 14/ 24] < 89> 'Y force_umount /mnt/glusterfs/0'
ok 35 [ 12/ 1144] < 93> 'gluster --mode=script --wignore volume start patchy force'
ok 36 [ 17/ 88] < 94> 'glusterfs -s builder210.int.aws.gluster.org --volfile-id patchy /mnt/glusterfs/0'
ok 37 [ 10996/ 424] < 108> 'gluster --mode=script --wignore volume add-brick patchy builder210.int.aws.gluster.org:/d/backends/patchy5 builder210.int.aws.gluster.org:/d/backends/patchy6'
ok 38 [ 24/ 5266] < 111> 'gluster --mode=script --wignore volume rebalance patchy fix-layout start'
ok 39 [ 28/ 26682] < 113> 'fix-layout completed fix-layout_status_field patchy'
ok 40 [ 15/ 111] < 116> 'mkdir /mnt/glusterfs/0/dir21 /mnt/glusterfs/0/dir22 /mnt/glusterfs/0/dir23 /mnt/glusterfs/0/dir24 /mnt/glusterfs/0/dir25 /mnt/glusterfs/0/dir26 /mnt/glusterfs/0/dir27 /mnt/glusterfs/0/dir28 /mnt/glusterfs/0/dir29 /mnt/glusterfs/0/dir30'
ok 41 [ 44/ 197] < 117> 'touch /mnt/glusterfs/0/dir21/files1 /mnt/glusterfs/0/dir21/files2 /mnt/glusterfs/0/dir21/files3 /mnt/glusterfs/0/dir21/files4 /mnt/glusterfs/0/dir21/files5 /mnt/glusterfs/0/dir21/files6 /mnt/glusterfs/0/dir21/files7 /mnt/glusterfs/0/dir21/files8 /mnt/glusterfs/0/dir21/files9 /mnt/glusterfs/0/dir21/files10 /mnt/glusterfs/0/dir22/files1 /mnt/glusterfs/0/dir22/files2 /mnt/glusterfs/0/dir22/files3 /mnt/glusterfs/0/dir22/files4 /mnt/glusterfs/0/dir22/files5 /mnt/glusterfs/0/dir22/files6 /mnt/glusterfs/0/dir22/files7 /mnt/glusterfs/0/dir22/files8 /mnt/glusterfs/0/dir22/files9 /mnt/glusterfs/0/dir22/files10 /mnt/glusterfs/0/dir23/files1 /mnt/glusterfs/0/dir23/files2 /mnt/glusterfs/0/dir23/files3 /mnt/glusterfs/0/dir23/files4 /mnt/glusterfs/0/dir23/files5 /mnt/glusterfs/0/dir23/files6 /mnt/glusterfs/0/dir23/files7 /mnt/glusterfs/0/dir23/files8 /mnt/glusterfs/0/dir23/files9 /mnt/glusterfs/0/dir23/files10 /mnt/glusterfs/0/dir24/files1 /mnt/glusterfs/0/dir24/files2 /mnt/glusterfs/0/dir24/files3 /mnt/glusterfs/0/dir24/files4 /mnt/glusterfs/0/dir24/files5 /mnt/glusterfs/0/dir24/files6 /mnt/glusterfs/0/dir24/files7 /mnt/glusterfs/0/dir24/files8 /mnt/glusterfs/0/dir24/files9 /mnt/glusterfs/0/dir24/files10 /mnt/glusterfs/0/dir25/files1 /mnt/glusterfs/0/dir25/files2 /mnt/glusterfs/0/dir25/files3 /mnt/glusterfs/0/dir25/files4 /mnt/glusterfs/0/dir25/files5 /mnt/glusterfs/0/dir25/files6 /mnt/glusterfs/0/dir25/files7 /mnt/glusterfs/0/dir25/files8 /mnt/glusterfs/0/dir25/files9 /mnt/glusterfs/0/dir25/files10 /mnt/glusterfs/0/dir26/files1 /mnt/glusterfs/0/dir26/files2 /mnt/glusterfs/0/dir26/files3 /mnt/glusterfs/0/dir26/files4 /mnt/glusterfs/0/dir26/files5 /mnt/glusterfs/0/dir26/files6 /mnt/glusterfs/0/dir26/files7 /mnt/glusterfs/0/dir26/files8 /mnt/glusterfs/0/dir26/files9 /mnt/glusterfs/0/dir26/files10 /mnt/glusterfs/0/dir27/files1 /mnt/glusterfs/0/dir27/files2 /mnt/glusterfs/0/dir27/files3 /mnt/glusterfs/0/dir27/files4 /mnt/glusterfs/0/dir27/files5 
/mnt/glusterfs/0/dir27/files6 /mnt/glusterfs/0/dir27/files7 /mnt/glusterfs/0/dir27/files8 /mnt/glusterfs/0/dir27/files9 /mnt/glusterfs/0/dir27/files10 /mnt/glusterfs/0/dir28/files1 /mnt/glusterfs/0/dir28/files2 /mnt/glusterfs/0/dir28/files3 /mnt/glusterfs/0/dir28/files4 /mnt/glusterfs/0/dir28/files5 /mnt/glusterfs/0/dir28/files6 /mnt/glusterfs/0/dir28/files7 /mnt/glusterfs/0/dir28/files8 /mnt/glusterfs/0/dir28/files9 /mnt/glusterfs/0/dir28/files10 /mnt/glusterfs/0/dir29/files1 /mnt/glusterfs/0/dir29/files2 /mnt/glusterfs/0/dir29/files3 /mnt/glusterfs/0/dir29/files4 /mnt/glusterfs/0/dir29/files5 /mnt/glusterfs/0/dir29/files6 /mnt/glusterfs/0/dir29/files7 /mnt/glusterfs/0/dir29/files8 /mnt/glusterfs/0/dir29/files9 /mnt/glusterfs/0/dir29/files10 /mnt/glusterfs/0/dir30/files1 /mnt/glusterfs/0/dir30/files2 /mnt/glusterfs/0/dir30/files3 /mnt/glusterfs/0/dir30/files4 /mnt/glusterfs/0/dir30/files5 /mnt/glusterfs/0/dir30/files6 /mnt/glusterfs/0/dir30/files7 /mnt/glusterfs/0/dir30/files8 /mnt/glusterfs/0/dir30/files9 /mnt/glusterfs/0/dir30/files10'
ok 42 [ 13/ 468] < 119> 'gluster --mode=script --wignore volume add-brick patchy builder210.int.aws.gluster.org:/d/backends/patchy7 builder210.int.aws.gluster.org:/d/backends/patchy8'
ok 43 [ 201/ 5374] < 121> 'gluster --mode=script --wignore volume rebalance patchy start force'
ok 44 [ 172/ 64556] < 122> 'completed rebalance_status_field patchy'
ok 45 [ 13/ 48] < 124> 'pkill gluster'
ok 46 [ 23/ 1598] < 125> 'glusterd'
ok 47 [ 32/ 41] < 126> 'pidof glusterd'
ok 48 [ 19/ 85] < 129> 'completed rebalance_status_field patchy'
ok
All tests successful.
Files=1, Tests=48, 158 wallclock secs ( 0.03 usr 0.00 sys + 12.24 cusr 8.05 csys = 20.32 CPU)
Result: PASS
Logs preserved in tarball rebalance-operations-in-single-node-iteration-1.tar
End of test ./tests/bugs/glusterd/rebalance-operations-in-single-node.t
================================================================================
================================================================================
[22:04:04] Running tests in file ./tests/bugs/glusterd/remove-brick-in-cluster.t
./tests/bugs/glusterd/remove-brick-in-cluster.t ..
1..21
ok 1 [ 236/ 2418] < 8> 'launch_cluster 2'
ok 2 [ 14/ 173] < 11> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-in-cluster.t_cli1.log volume create patchy replica 2 127.1.1.1:/d/backends/1/patchy1 127.1.1.1:/d/backends/1/patchy2 127.1.1.1:/d/backends/1/patchy3 127.1.1.1:/d/backends/1/patchy4'
ok 3 [ 12/ 2279] < 12> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-in-cluster.t_cli1.log volume start patchy'
ok 4 [ 31/ 156] < 14> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-in-cluster.t_cli1.log peer probe 127.1.1.2'
ok 5 [ 17/ 1343] < 15> '1 peer_count'
ok 6 [ 11/ 5179] < 17> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/2/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-in-cluster.t_cli2.log volume remove-brick patchy 127.1.1.1:/d/backends/1/patchy3 127.1.1.1:/d/backends/1/patchy4 start'
ok 7 [ 28/ 94] < 18> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/2/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-in-cluster.t_cli2.log volume info'
ok 8 [ 11/ 143] < 21> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-in-cluster.t_cli1.log volume create patchy1 127.1.1.1:/d/backends/1/patchy10 127.1.1.2:/d/backends/2/patchy11'
ok 9 [ 16/ 306] < 22> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-in-cluster.t_cli1.log volume start patchy1'
ok 10 [ 11/ 5152] < 23> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-in-cluster.t_cli1.log volume remove-brick patchy1 127.1.1.2:/d/backends/2/patchy11 start'
ok 11 [ 29/ 72] < 24> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-in-cluster.t_cli1.log volume status'
ok 12 [ 11/ 3128] < 26> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-in-cluster.t_cli1.log volume stop patchy'
ok 13 [ 10/ 64] < 27> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-in-cluster.t_cli1.log volume delete patchy'
ok 14 [ 10/ 205] < 31> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-in-cluster.t_cli1.log volume create patchy replica 3 127.1.1.1:/d/backends/1/brick1 127.1.1.2:/d/backends/2/brick2 127.1.1.1:/d/backends/1/brick3 127.1.1.2:/d/backends/2/brick4 127.1.1.1:/d/backends/1/brick5 127.1.1.2:/d/backends/2/brick6'
ok 15 [ 11/ 3283] < 36> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-in-cluster.t_cli1.log volume start patchy'
ok 16 [ 29/ 269] < 39> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-in-cluster.t_cli1.log volume remove-brick patchy replica 2 127.1.1.1:/d/backends/1/brick1 127.1.1.2:/d/backends/2/brick6 force'
ok 17 [ 47/ 227] < 42> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-in-cluster.t_cli1.log volume remove-brick patchy replica 2 127.1.1.1:/d/backends/1/brick3 127.1.1.2:/d/backends/2/brick2 force'
ok 18 [ 15/ 3115] < 45> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-in-cluster.t_cli1.log volume remove-brick patchy replica 1 127.1.1.1:/d/backends/1/brick5 force'
ok 19 [ 12/ 2286] < 51> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-in-cluster.t_cli1.log volume add-brick patchy replica 2 127.1.1.1:/d/backends/1/brick5 force'
ok 20 [ 12/ 260] < 54> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-in-cluster.t_cli1.log volume add-brick patchy replica 2 127.1.1.1:/d/backends/1/brick3 127.1.1.2:/d/backends/2/brick2 force'
ok 21 [ 14/ 532] < 57> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/remove-brick-in-cluster.t_cli1.log volume add-brick patchy replica 3 127.1.1.1:/d/backends/1/brick1 127.1.1.2:/d/backends/2/brick6 force'
ok
All tests successful.
Files=1, Tests=21, 32 wallclock secs ( 0.02 usr 0.01 sys + 1.34 cusr 0.84 csys = 2.21 CPU)
Result: PASS
Logs preserved in tarball remove-brick-in-cluster-iteration-1.tar
End of test ./tests/bugs/glusterd/remove-brick-in-cluster.t
================================================================================
================================================================================
[22:04:36] Running tests in file ./tests/bugs/glusterd/remove-brick-testcases.t
./tests/bugs/glusterd/remove-brick-testcases.t ..
1..35
ok 1 [ 216/ 1313] < 20> 'glusterd'
ok 2 [ 12/ 9] < 21> 'pidof glusterd'
ok 3 [ 10/ 258] < 23> 'gluster --mode=script --wignore volume create patchy builder210.int.aws.gluster.org:/d/backends/patchy1 builder210.int.aws.gluster.org:/d/backends/patchy2 builder210.int.aws.gluster.org:/d/backends/patchy3 builder210.int.aws.gluster.org:/d/backends/patchy4 builder210.int.aws.gluster.org:/d/backends/patchy5'
ok 4 [ 51/ 1509] < 24> 'gluster --mode=script --wignore volume start patchy'
OK
ok 5 [ 2267/ 207] < 29> '0 brick_up_status patchy builder210.int.aws.gluster.org /d/backends/patchy1'
ok 6 [ 78/ 137] < 32> '! gluster --mode=script --wignore volume remove-brick patchy builder210.int.aws.gluster.org:/d/backends/patchy1 start'
ok 7 [ 84/ 520] < 34> 'gluster --mode=script --wignore volume start patchy force'
ok 8 [ 78/ 404] < 35> '1 brick_up_status patchy builder210.int.aws.gluster.org /d/backends/patchy1'
ok 9 [ 283/ 5527] < 38> 'gluster --mode=script --wignore volume remove-brick patchy builder210.int.aws.gluster.org:/d/backends/patchy1 start'
ok 10 [ 183/ 830] < 40> 'completed remove_brick_status_completed_field patchy builder210.int.aws.gluster.org:/d/backends/patchy1'
OK
ok 11 [ 2392/ 397] < 46> 'gluster --mode=script --wignore volume remove-brick patchy builder210.int.aws.gluster.org:/d/backends/patchy1 commit'
ok 12 [ 72/ 5629] < 50> 'gluster --mode=script --wignore volume remove-brick patchy builder210.int.aws.gluster.org:/d/backends/patchy2 start'
ok 13 [ 80/ 127] < 54> '! gluster --mode=script --wignore volume remove-brick patchy builder210.int.aws.gluster.org:/d/backends/ABCD status'
ok 14 [ 12/ 165] < 55> '! gluster --mode=script --wignore volume remove-brick patchy builder210.int.aws.gluster.org:/d/backends/ABCD stop'
ok 15 [ 12/ 81] < 60> 'gluster --mode=script --wignore volume remove-brick patchy builder210.int.aws.gluster.org:/d/backends/patchy2 status'
ok 16 [ 83/ 258] < 61> 'gluster --mode=script --wignore volume remove-brick patchy builder210.int.aws.gluster.org:/d/backends/patchy2 stop'
ok 17 [ 11/ 181] < 64> 'gluster --mode=script --wignore volume remove-brick patchy builder210.int.aws.gluster.org:/d/backends/patchy2 force'
ok 18 [ 39/ 136] < 65> '3 brick_count patchy'
ok 19 [ 11/ 163] < 67> 'gluster --mode=script --wignore volume remove-brick patchy builder210.int.aws.gluster.org:/d/backends/patchy3 force'
ok 20 [ 12/ 144] < 68> '2 brick_count patchy'
ok 21 [ 29/ 87] < 76> 'decommissioned remove_brick_commit_status'
ok 22 [ 81/ 1124] < 78> 'gluster --mode=script --wignore volume stop patchy'
ok 23 [ 20/ 3961] < 79> 'gluster --mode=script --wignore volume delete patchy'
ok 24 [ 18/ 182] < 82> 'gluster --mode=script --wignore volume create patchy replica 3 builder210.int.aws.gluster.org:/d/backends/patchy1 builder210.int.aws.gluster.org:/d/backends/patchy2 builder210.int.aws.gluster.org:/d/backends/patchy3 builder210.int.aws.gluster.org:/d/backends/patchy4 builder210.int.aws.gluster.org:/d/backends/patchy5 builder210.int.aws.gluster.org:/d/backends/patchy6'
ok 25 [ 35/ 2505] < 83> 'gluster --mode=script --wignore volume start patchy'
ok 26 [ 85/ 259] < 90> 'failed remove_brick_start_status'
ok 27 [ 270/ 323] < 98> 'decommissioned remove_brick_commit_status2'
ok 28 [ 374/ 251] < 99> 'gluster --mode=script --wignore volume info patchy'
ok 29 [ 88/ 394] < 108> 'success remove_brick_status'
ok 30 [ 206/ 253] < 109> 'gluster --mode=script --wignore volume info patchy'
ok 31 [ 133/ 5482] < 113> 'gluster --mode=script --wignore volume remove-brick patchy builder210.int.aws.gluster.org:/d/backends/patchy4 builder210.int.aws.gluster.org:/d/backends/patchy5 start'
ok 32 [ 192/ 1512] < 114> 'completed remove_brick_status_completed_field patchy builder210.int.aws.gluster.org:/d/backends/patchy5'
ok 33 [ 10/ 76] < 115> 'completed remove_brick_status_completed_field patchy builder210.int.aws.gluster.org:/d/backends/patchy4'
ok 34 [ 25/ 329] < 116> 'gluster --mode=script --wignore volume remove-brick patchy builder210.int.aws.gluster.org:/d/backends/patchy4 builder210.int.aws.gluster.org:/d/backends/patchy5 commit'
ok 35 [ 110/ 1420] < 117> 'gluster --mode=script --wignore volume remove-brick patchy replica 1 builder210.int.aws.gluster.org:/d/backends/patchy2 force'
ok
All tests successful.
Files=1, Tests=35, 46 wallclock secs ( 0.03 usr 0.00 sys + 2.25 cusr 1.35 csys = 3.63 CPU)
Result: PASS
Logs preserved in tarball remove-brick-testcases-iteration-1.tar
End of test ./tests/bugs/glusterd/remove-brick-testcases.t
================================================================================
================================================================================
[22:05:22] Running tests in file ./tests/bugs/glusterd/removing-multiple-bricks-in-single-remove-brick-command.t
./tests/bugs/glusterd/removing-multiple-bricks-in-single-remove-brick-command.t ..
1..21
ok 1 [ 1316/ 1963] < 9> 'glusterd'
ok 2 [ 74/ 59] < 10> 'pidof glusterd'
No volumes present
ok 3 [ 222/ 450] < 11> 'gluster --mode=script --wignore volume info'
ok 4 [ 120/ 847] < 14> 'gluster --mode=script --wignore volume create patchy replica 2 builder210.int.aws.gluster.org:/d/backends/patchy1 builder210.int.aws.gluster.org:/d/backends/patchy2 builder210.int.aws.gluster.org:/d/backends/patchy3 builder210.int.aws.gluster.org:/d/backends/patchy4 builder210.int.aws.gluster.org:/d/backends/patchy5 builder210.int.aws.gluster.org:/d/backends/patchy6'
ok 5 [ 84/ 2139] < 15> 'gluster --mode=script --wignore volume start patchy'
ok 6 [ 132/ 319] < 19> 'glusterfs -s builder210.int.aws.gluster.org --volfile-id patchy /mnt/glusterfs/0'
ok 7 [ 115/ 793] < 20> 'touch /mnt/glusterfs/0/file1 /mnt/glusterfs/0/file2 /mnt/glusterfs/0/file3 /mnt/glusterfs/0/file4 /mnt/glusterfs/0/file5 /mnt/glusterfs/0/file6 /mnt/glusterfs/0/file7 /mnt/glusterfs/0/file8 /mnt/glusterfs/0/file9 /mnt/glusterfs/0/file10'
ok 8 [ 21/ 5777] < 29> 'success remove_brick_start_status'
ok 9 [ 342/ 2183] < 32> 'completed remove_brick_status_completed_field patchy builder210.int.aws.gluster.org:/d/backends/patchy6 builder210.int.aws.gluster.org:/d/backends/patchy1 builder210.int.aws.gluster.org:/d/backends/patchy2 builder210.int.aws.gluster.org:/d/backends/patchy5'
ok 10 [ 95/ 467] < 40> 'success remove_brick_commit_status'
ok 11 [ 660/ 101] < 44> 'Replicate echo Replicate'
ok 12 [ 501/ 453] < 45> 'Y force_umount /mnt/glusterfs/0'
ok 13 [ 144/ 1798] < 50> 'gluster --mode=script --wignore volume create patchy1 replica 3 builder210.int.aws.gluster.org:/d/backends/patchy10 builder210.int.aws.gluster.org:/d/backends/patchy11 builder210.int.aws.gluster.org:/d/backends/patchy12 builder210.int.aws.gluster.org:/d/backends/patchy13 builder210.int.aws.gluster.org:/d/backends/patchy14 builder210.int.aws.gluster.org:/d/backends/patchy15 builder210.int.aws.gluster.org:/d/backends/patchy16 builder210.int.aws.gluster.org:/d/backends/patchy17 builder210.int.aws.gluster.org:/d/backends/patchy18'
ok 14 [ 71/ 2784] < 51> 'gluster --mode=script --wignore volume start patchy1'
ok 15 [ 71/ 577] < 54> 'glusterfs -s builder210.int.aws.gluster.org --volfile-id patchy1 /mnt/glusterfs/0'
ok 16 [ 95/ 35] < 55> 'touch /mnt/glusterfs/0/zerobytefile.txt'
ok 17 [ 104/ 119] < 56> 'mkdir /mnt/glusterfs/0/test_dir'
ok 18 [ 77/ 240] < 57> 'dd if=/dev/zero of=/mnt/glusterfs/0/file bs=1024 count=1024'
ok 19 [ 76/ 276] < 70> 'failed remove_brick_start'
ok 20 [ 746/ 2239] < 75> 'success remove_brick'
ok 21 [ 171/ 222] < 77> 'Y force_umount /mnt/glusterfs/0'
ok
All tests successful.
Files=1, Tests=21, 31 wallclock secs ( 0.03 usr 0.00 sys + 1.21 cusr 1.04 csys = 2.28 CPU)
Result: PASS
Logs preserved in tarball removing-multiple-bricks-in-single-remove-brick-command-iteration-1.tar
End of test ./tests/bugs/glusterd/removing-multiple-bricks-in-single-remove-brick-command.t
================================================================================
================================================================================
[22:05:54] Running tests in file ./tests/bugs/glusterd/replace-brick-operations.t
./tests/bugs/glusterd/replace-brick-operations.t ..
1..14
ok 1 [ 986/ 1403] < 11> 'glusterd'
ok 2 [ 11/ 17] < 12> 'pidof glusterd'
ok 3 [ 11/ 272] < 15> 'gluster --mode=script --wignore volume create patchy replica 2 builder210.int.aws.gluster.org:/d/backends/patchy1 builder210.int.aws.gluster.org:/d/backends/patchy2'
ok 4 [ 89/ 2738] < 16> 'gluster --mode=script --wignore volume start patchy'
ok 5 [ 166/ 286] < 24> '! gluster --mode=script --wignore volume replace-brick patchy builder210.int.aws.gluster.org:/d/backends/patchy2 builder210.int.aws.gluster.org:/d/backends/patchy3 start'
ok 6 [ 122/ 252] < 25> '! gluster --mode=script --wignore volume replace-brick patchy builder210.int.aws.gluster.org:/d/backends/patchy2 builder210.int.aws.gluster.org:/d/backends/patchy3 status'
ok 7 [ 77/ 175] < 26> '! gluster --mode=script --wignore volume replace-brick patchy builder210.int.aws.gluster.org:/d/backends/patchy2 builder210.int.aws.gluster.org:/d/backends/patchy3 abort'
ok 8 [ 76/ 3838] < 30> 'gluster --mode=script --wignore volume replace-brick patchy builder210.int.aws.gluster.org:/d/backends/patchy2 builder210.int.aws.gluster.org:/d/backends/patchy3 commit force'
ok 9 [ 30/ 114] < 34> 'glusterfs --volfile-id=patchy --volfile-server=builder210.int.aws.gluster.org /mnt/glusterfs/0'
ok 10 [ 25/ 2262] < 37> 'gluster --mode=script --wignore volume replace-brick patchy builder210.int.aws.gluster.org:/d/backends/patchy1 builder210.int.aws.gluster.org:/d/backends/patchy1_new commit force'
ok 11 [ 68/ 237] < 39> '1 afr_child_up_status patchy 1'
ok 12 [ 12/ 2101] < 41> 'kill_brick patchy builder210.int.aws.gluster.org /d/backends/patchy1_new'
ok 13 [ 12/ 2222] < 44> 'gluster --mode=script --wignore volume replace-brick patchy builder210.int.aws.gluster.org:/d/backends/patchy1_new builder210.int.aws.gluster.org:/d/backends/patchy1_newer commit force'
ok 14 [ 11/ 442] < 46> '1 afr_child_up_status patchy 1'
ok
All tests successful.
Files=1, Tests=14, 18 wallclock secs ( 0.02 usr 0.00 sys + 0.96 cusr 0.72 csys = 1.70 CPU)
Result: PASS
Logs preserved in tarball replace-brick-operations-iteration-1.tar
End of test ./tests/bugs/glusterd/replace-brick-operations.t
================================================================================
================================================================================
[22:06:12] Running tests in file ./tests/bugs/glusterd/reset-brick-and-daemons-follow-quorum.t
./tests/bugs/glusterd/reset-brick-and-daemons-follow-quorum.t ..
1..17
ok 1 [ 224/ 4077] < 24> 'launch_cluster 3'
ok 2 [ 11/ 132] < 25> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/reset-brick-and-daemons-follow-quorum.t_cli1.log peer probe 127.1.1.2'
ok 3 [ 18/ 88] < 26> '1 check_peers'
ok 4 [ 12/ 138] < 28> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/reset-brick-and-daemons-follow-quorum.t_cli1.log volume create patchy replica 2 127.1.1.1:/d/backends/patchy 127.1.1.2:/d/backends/patchy'
ok 5 [ 12/ 2506] < 29> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/reset-brick-and-daemons-follow-quorum.t_cli1.log volume start patchy'
ok 6 [ 17/ 83] < 33> '! gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/reset-brick-and-daemons-follow-quorum.t_cli1.log volume reset-brick patchy 127.1.1.1:/d/backends/patchy 127.1.1.1:/d/backends/patchy commit force'
ok 7 [ 12/ 1070] < 35> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/reset-brick-and-daemons-follow-quorum.t_cli1.log volume reset-brick patchy 127.1.1.1:/d/backends/patchy start'
ok 8 [ 11/ 4238] < 37> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/reset-brick-and-daemons-follow-quorum.t_cli1.log volume reset-brick patchy 127.1.1.1:/d/backends/patchy 127.1.1.1:/d/backends/patchy commit force'
ok 9 [ 56/ 120] < 41> 'gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/var/log/glusterfs/reset-brick-and-daemons-follow-quorum.t_cli1.log peer probe 127.1.1.3'
ok 10 [ 12/ 1373] < 42> '2 peer_count'
ok 11 [ 13/ 78] < 44> '1 cluster_brick_up_status 1 patchy 127.1.1.1 /d/backends/patchy'
ok 12 [ 11/ 80] < 45> '1 cluster_brick_up_status 1 patchy 127.1.1.2 /d/backends/patchy'
ok 13 [ 11/ 78] < 46> 'Y shd_up_status_1'
ok 14 [ 11/ 82] < 47> 'Y shd_up_status_2'
ok 15 [ 96/ 3] < 53> 'kill_glusterd 1'
ok 16 [ 12/ 1510] < 56> 'glusterd --xlator-option management.working-directory=/d/backends/1/glusterd --xlator-option management.transport.socket.bind-address=127.1.1.1 --xlator-option management.run-directory=/d/backends/1/run/gluster --xlator-option management.glusterd-sockfile=/d/backends/1/glusterd/gd.sock --xlator-option management.cluster-test-mode=/var/log/glusterfs/1 --log-file=/var/log/glusterfs/1/reset-brick-and-daemons-follow-quorum.t_glusterd1.log --pid-file=/d/backends/1/glusterd.pid'
ok 17 [ 28/ 1758] < 61> 'Y shd_up_status_2'
ok
All tests successful.
Files=1, Tests=17, 18 wallclock secs ( 0.02 usr 0.00 sys + 1.38 cusr 0.90 csys = 2.30 CPU)
Result: PASS
Logs preserved in tarball reset-brick-and-daemons-follow-quorum-iteration-1.tar
End of test ./tests/bugs/glusterd/reset-brick-and-daemons-follow-quorum.t
================================================================================
================================================================================
[22:06:31] Running tests in file ./tests/bugs/glusterd/serialize-shd-manager-glusterd-restart.t
Process leaked file descriptors. See https://jenkins.io/redirect/troubleshooting/process-leaked-file-descriptors for more information
Build step 'Execute shell' marked build as failure
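The "Process leaked file descriptors" failure above typically means a daemon spawned during the run (here, likely glusterd started by the next test file) inherited the build step's stdout/stderr pipes, so Jenkins keeps waiting on them after the step exits. A minimal sketch of the usual workaround is to detach all three standard fds when backgrounding a long-lived process; the `start_detached` helper below is hypothetical, not part of the gluster test harness:

```shell
#!/bin/sh
# Sketch (assumption: the leak comes from a daemon inheriting the build
# step's stdin/stdout/stderr). Redirect all three to /dev/null before
# backgrounding, so the "Execute shell" step's pipes can close.
start_detached() {
    "$@" </dev/null >/dev/null 2>&1 &
    echo $!   # print the background PID so the caller can track it
}

# Demo with a stand-in for a daemon such as glusterd:
pid=$(start_detached sleep 1)
echo "started pid $pid"
```

Redirecting to a log file instead of /dev/null (`>/var/log/mydaemon.log 2>&1`) keeps the daemon's output without tying it to the Jenkins step.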