[Gluster-Maintainers] Build failed in Jenkins: netbsd-periodic #453

jenkins at build.gluster.org
Thu Nov 16 18:26:50 UTC 2017


See <https://build.gluster.org/job/netbsd-periodic/453/display/redirect?page=changes>

Changes:

[R.Shyamsundar] tier: coverity fix for tier-common.c

------------------------------------------
[...truncated 317.54 KB...]
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: 127.1.1.1:/d/backends/1/ec1
Brick2: 127.1.1.2:/d/backends/1/ec2
Brick3: 127.1.1.3:/d/backends/1/ec3
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
ok 30, LINENUM:114
ok 31, LINENUM:116
ok 32, LINENUM:117
ok 33, LINENUM:120
ok 34, LINENUM:123
ok 35, LINENUM:129
                                    Node Rebalanced-files          size       scanned      failures       skipped               status  run time in h:m:s
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               localhost               78        0Bytes           500             0             0          in progress        0:05:07
                               127.1.1.2               74        0Bytes           507             0             0            completed        0:05:05
                               127.1.1.3               76        0Bytes           502             0             0          in progress        0:05:07
The estimated time for rebalance to complete will be unavailable for the first 10 minutes.
volume rebalance: patchy: success
ok 36, LINENUM:137
not ok 37 , LINENUM:138
FAILED COMMAND: gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/build/install/var/log/glusterfs/rebal-all-nodes-migrate.t_cli1.log volume stop patchy
not ok 38 , LINENUM:139
FAILED COMMAND: gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/build/install/var/log/glusterfs/rebal-all-nodes-migrate.t_cli1.log volume delete patchy
Failed 2/38 subtests 

Test Summary Report
-------------------
./tests/basic/distribute/rebal-all-nodes-migrate.t (Wstat: 0 Tests: 38 Failed: 2)
  Failed tests:  37-38
Files=1, Tests=38, 1657 wallclock secs ( 0.06 usr  0.01 sys + 21.83 cusr 760.58 csys = 782.48 CPU)
Result: FAIL
./tests/basic/distribute/rebal-all-nodes-migrate.t: bad status 1

       *********************************
       *       REGRESSION FAILED       *
       * Retrying failed tests in case *
       * we got some spurious failures *
       *********************************

volume stop: patchy: failed: rebalance session is in progress for the volume 'patchy'
volume delete: patchy: failed: Volume patchy has been started.Volume needs to be stopped before deletion.
volume create: patchy: failed: Volume patchy already exists
volume start: patchy: failed: Volume patchy already started
volume add-brick: failed: Incorrect number of bricks supplied 3 with count 2
volume rebalance: patchy: failed: error
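The messages above suggest the likely root cause: subtests 37/38 in the first run tried to stop and delete the volume while the rebalance started earlier was still in progress, and the remaining errors (create, start, add-brick, rebalance) are knock-on effects of that stale volume when the retry sets up again. A minimal sketch of the wait-before-teardown idiom, assuming the test framework's TEST/EXPECT_WITHIN helpers, the $CLI_1/$V0 variables, a $REBALANCE_TIMEOUT setting, and the cluster_rebalance_completed check that subtest 24 already relies on (names taken from this log or assumed, not from the actual .t file):

    # Sketch only, not the test's real code: wait until the cluster reports
    # the rebalance as finished before stopping and deleting the volume.
    EXPECT_WITHIN $REBALANCE_TIMEOUT "0" cluster_rebalance_completed
    TEST $CLI_1 volume stop $V0
    TEST $CLI_1 volume delete $V0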
./tests/basic/distribute/rebal-all-nodes-migrate.t .. 
1..38
ok 1, LINENUM:28
ok 2, LINENUM:29
ok 3, LINENUM:30
ok 4, LINENUM:31
ok 5, LINENUM:35
ok 6, LINENUM:37
 
Volume Name: patchy
Type: Distribute
Volume ID: 8574334f-84b0-44e0-bc44-c667a1392567
Status: Started
Snapshot Count: 0
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 127.1.1.1:/d/backends/1/dist1
Brick2: 127.1.1.1:/d/backends/1/dist2
Brick3: 127.1.1.2:/d/backends/2/dist3
Brick4: 127.1.1.2:/d/backends/2/dist4
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
ok 7, LINENUM:43
ok 8, LINENUM:45
ok 9, LINENUM:46
ok 10, LINENUM:49
ok 11, LINENUM:52
ok 12, LINENUM:58
ok 13, LINENUM:59
                                    Node Rebalanced-files          size       scanned      failures       skipped               status  run time in h:m:s
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               localhost               68        0Bytes           229             0             0            completed        0:01:00
                               127.1.1.2               96        0Bytes           271             0             0            completed        0:01:09
volume rebalance: patchy: success
ok 14, LINENUM:63
ok 15, LINENUM:64
ok 16, LINENUM:65
ok 17, LINENUM:71
ok 18, LINENUM:73
 
Volume Name: patchy
Type: Distributed-Replicate
Volume ID: 4c05bd3f-3408-4c90-8489-0396f6ab082d
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 127.1.1.1:/d/backends/1/drep1
Brick2: 127.1.1.2:/d/backends/2/drep1
Brick3: 127.1.1.1:/d/backends/1/drep2
Brick4: 127.1.1.2:/d/backends/2/drep2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
ok 19, LINENUM:79
ok 20, LINENUM:81
ok 21, LINENUM:82
ok 22, LINENUM:85
ok 23, LINENUM:88
not ok 24 Got "1" instead of "0", LINENUM:94
FAILED COMMAND: 0 cluster_rebalance_completed
                                    Node Rebalanced-files          size       scanned      failures       skipped               status  run time in h:m:s
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               localhost              114        0Bytes           500             0             0          in progress        0:06:00
                               127.1.1.2              112        0Bytes           501             0             0          in progress        0:06:00
The estimated time for rebalance to complete will be unavailable for the first 10 minutes.
volume rebalance: patchy: success
ok 25, LINENUM:99
not ok 26 , LINENUM:100
FAILED COMMAND: gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/build/install/var/log/glusterfs/rebal-all-nodes-migrate.t_cli1.log volume stop patchy
not ok 27 , LINENUM:101
FAILED COMMAND: gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/build/install/var/log/glusterfs/rebal-all-nodes-migrate.t_cli1.log volume delete patchy
not ok 28 , LINENUM:106
FAILED COMMAND: gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/build/install/var/log/glusterfs/rebal-all-nodes-migrate.t_cli1.log volume create patchy disperse 3 127.1.1.1:/d/backends/1/ec1 127.1.1.2:/d/backends/1/ec2 127.1.1.3:/d/backends/1/ec3 force
not ok 29 , LINENUM:108
FAILED COMMAND: gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/build/install/var/log/glusterfs/rebal-all-nodes-migrate.t_cli1.log volume start patchy
 
Volume Name: patchy
Type: Distributed-Replicate
Volume ID: 4c05bd3f-3408-4c90-8489-0396f6ab082d
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: 127.1.1.1:/d/backends/1/drep1
Brick2: 127.1.1.2:/d/backends/2/drep1
Brick3: 127.1.1.1:/d/backends/1/drep2
Brick4: 127.1.1.2:/d/backends/2/drep2
Brick5: 127.1.1.1:/d/backends/1/drep3
Brick6: 127.1.1.2:/d/backends/2/drep3
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
ok 30, LINENUM:114
not ok 31 , LINENUM:116
FAILED COMMAND: mkdir /mnt/glusterfs/0/dir1
ok 32, LINENUM:117
not ok 33 , LINENUM:120
FAILED COMMAND: gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/build/install/var/log/glusterfs/rebal-all-nodes-migrate.t_cli1.log volume add-brick patchy 127.1.1.1:/d/backends/2/ec4 127.1.1.2:/d/backends/2/ec5 127.1.1.3:/d/backends/2/ec6
ok 34, LINENUM:123
ok 35, LINENUM:129
                                    Node Rebalanced-files          size       scanned      failures       skipped               status  run time in h:m:s
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               localhost                0        0Bytes           500             0             0            completed        0:01:46
                               127.1.1.2                0        0Bytes           500             0             0            completed        0:01:42
volume rebalance: patchy: success
ok 36, LINENUM:137
ok 37, LINENUM:138
ok 38, LINENUM:139
Failed 7/38 subtests 

Test Summary Report
-------------------
./tests/basic/distribute/rebal-all-nodes-migrate.t (Wstat: 0 Tests: 38 Failed: 7)
  Failed tests:  24, 26-29, 31, 33
Files=1, Tests=38, 2010 wallclock secs ( 0.03 usr  0.02 sys + 22.71 cusr 854.76 csys = 877.52 CPU)
Result: FAIL
End of test ./tests/basic/distribute/rebal-all-nodes-migrate.t
================================================================================


Run complete
================================================================================
Number of tests found:                             49
Number of tests selected for run based on pattern: 49
Number of tests skipped as they were marked bad:   2
Number of tests skipped because of known_issues:   0
Number of tests that were run:                     47

Tests ordered by time taken, slowest to fastest: 
================================================================================
./tests/basic/distribute/rebal-all-nodes-migrate.t  -  1657 second
./tests/basic/afr/gfid-mismatch-resolution-with-fav-child-policy.t  -  314 second
./tests/basic/afr/split-brain-favorite-child-policy.t  -  275 second
./tests/basic/afr/lk-quorum.t  -  261 second
./tests/basic/afr/self-heal.t  -  143 second
./tests/basic/afr/self-heald.t  -  142 second
./tests/basic/afr/sparse-file-self-heal.t  -  130 second
./tests/basic/afr/gfid-mismatch-resolution-with-cli.t  -  122 second
./tests/basic/afr/entry-self-heal.t  -  112 second
./tests/basic/afr/split-brain-heal-info.t  -  95 second
./tests/basic/afr/metadata-self-heal.t  -  64 second
./tests/basic/afr/inodelk.t  -  60 second
./tests/basic/afr/split-brain-healing.t  -  59 second
./tests/basic/afr/quorum.t  -  52 second
./tests/basic/afr/arbiter-add-brick.t  -  49 second
./tests/basic/afr/arbiter.t  -  44 second
./tests/basic/afr/gfid-self-heal.t  -  43 second
./tests/basic/afr/granular-esh/conservative-merge.t  -  40 second
./tests/basic/afr/arbiter-mount.t  -  38 second
./tests/basic/afr/durability-off.t  -  29 second
./tests/basic/afr/split-brain-resolution.t  -  28 second
./tests/basic/afr/split-brain-open.t  -  28 second
./tests/basic/afr/granular-esh/granular-esh.t  -  25 second
./tests/basic/afr/arbiter-remove-brick.t  -  24 second
./tests/basic/afr/granular-esh/replace-brick.t  -  23 second
./tests/basic/afr/add-brick-self-heal.t  -  23 second
./tests/basic/afr/granular-esh/granular-indices-but-non-granular-heal.t  -  23 second
./tests/basic/afr/replace-brick-self-heal.t  -  23 second
./tests/basic/afr/read-subvol-data.t  -  22 second
./tests/basic/afr/resolve.t  -  21 second
./tests/basic/afr/heal-quota.t  -  20 second
./tests/basic/afr/granular-esh/add-brick.t  -  20 second
./tests/basic/afr/client-side-heal.t  -  19 second
./tests/basic/afr/data-self-heal.t  -  18 second
./tests/basic/afr/root-squash-self-heal.t  -  18 second
./tests/basic/afr/heal-info.t  -  16 second
./tests/basic/afr/stale-file-lookup.t  -  16 second
./tests/basic/afr/read-subvol-entry.t  -  16 second
./tests/basic/afr/gfid-heal.t  -  15 second
./tests/basic/cdc.t  -  14 second
./tests/basic/afr/compounded-write-txns.t  -  11 second
./tests/basic/afr/arbiter-statfs.t  -  11 second
./tests/basic/distribute/bug-1265677-use-readdirp.t  -  10 second
./tests/basic/afr/gfid-mismatch.t  -  10 second
./tests/basic/afr/arbiter-cli.t  -  4 second
./tests/basic/0symbol-check.t  -  0 second
./tests/basic/bd.t  -  0 second

1 test(s) failed 
./tests/basic/distribute/rebal-all-nodes-migrate.t

0 test(s) generated core 


Result is 1

tar: Removing leading / from absolute path names in the archive
Logs archived in http://nbslave72.cloud.gluster.org/archives/logs/glusterfs-logs-20171116163439.tgz
error: fatal: change is closed

fatal: one or more reviews failed; review output above
Build step 'Execute shell' marked build as failure