[Gluster-Maintainers] Build failed in Jenkins: netbsd-periodic #250
jenkins at build.gluster.org
Sat Sep 9 18:43:41 UTC 2017
See <https://build.gluster.org/job/netbsd-periodic/250/display/redirect>
------------------------------------------
[...truncated 301.02 KB...]
Bricks:
Brick1: 127.1.1.1:/d/backends/1/ec1
Brick2: 127.1.1.2:/d/backends/1/ec2
Brick3: 127.1.1.3:/d/backends/1/ec3
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
ok 30, LINENUM:114
ok 31, LINENUM:116
ok 32, LINENUM:117
ok 33, LINENUM:120
ok 34, LINENUM:123
ok 35, LINENUM:129
     Node   Rebalanced-files     size   scanned   failures   skipped        status   run time in h:m:s
---------   ----------------   ------   -------   --------   -------   -----------   -----------------
localhost                 89   0Bytes       500          0         0     completed             0:02:44
127.1.1.2                 73   0Bytes       508          0         0   in progress             0:02:45
127.1.1.3                 69   0Bytes       500          0         0     completed             0:02:42
The estimated time for rebalance to complete will be unavailable for the first 10 minutes.
volume rebalance: patchy: success
ok 36, LINENUM:137
not ok 37, LINENUM:138
FAILED COMMAND: gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/build/install/var/log/glusterfs/rebal-all-nodes-migrate.t_cli1.log volume stop patchy
not ok 38, LINENUM:139
FAILED COMMAND: gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/build/install/var/log/glusterfs/rebal-all-nodes-migrate.t_cli1.log volume delete patchy
Failed 2/38 subtests
Test Summary Report
-------------------
./tests/basic/distribute/rebal-all-nodes-migrate.t (Wstat: 0 Tests: 38 Failed: 2)
Failed tests: 37-38
Files=1, Tests=38, 1278 wallclock secs ( 0.04 usr 0.02 sys + 26.46 cusr 640.81 csys = 667.33 CPU)
Result: FAIL
./tests/basic/distribute/rebal-all-nodes-migrate.t: bad status 1
*********************************
* REGRESSION FAILED *
* Retrying failed tests in case *
* we got some spurious failures *
*********************************
volume stop: patchy: failed: rebalance session is in progress for the volume 'patchy'
volume delete: patchy: failed: Volume patchy has been started. Volume needs to be stopped before deletion.
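Both cleanup failures follow from the still-running rebalance: gluster refuses to stop a volume while a rebalance session is active, and a started volume cannot be deleted. A minimal sketch of the guard the test would need before its stop/delete steps at test lines 138-139, assuming the EXPECT_WITHIN macro and $REBALANCE_TIMEOUT from the glusterfs test harness (tests/include.rc), the cluster_rebalance_completed helper named in the retry's failure output further down, and the harness's conventional $CLI_1/$V0 names for node 1's CLI wrapper and the test volume:

    # Sketch only: block until every node reports rebalance completion,
    # then tear the volume down. EXPECT_WITHIN retries the check until
    # $REBALANCE_TIMEOUT expires; "0" is the value the helper is expected
    # to print once no node is still migrating (assumed helper semantics).
    EXPECT_WITHIN $REBALANCE_TIMEOUT "0" cluster_rebalance_completed
    TEST $CLI_1 volume stop $V0
    TEST $CLI_1 volume delete $V0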
rm: /d/backends/1/glusterd/vols/patchy: Directory not empty
rm: /d/backends/1/glusterd/vols: Directory not empty
rm: /d/backends/1/glusterd: Directory not empty
rm: /d/backends/1: Directory not empty
rm: /d/backends: Directory not empty
Aborting.
/mnt/nfs/1 could not be deleted; here are the leftover items:
drwxr-xr-x 3 root wheel 512 Sep 9 18:39 /d/backends
drwxr-xr-x 3 root wheel 512 Sep 9 18:20 /d/backends/1
drwxr-xr-x 3 root wheel 512 Sep 9 17:54 /d/backends/1/glusterd
drwxr-xr-x 3 root wheel 512 Sep 9 17:47 /d/backends/1/glusterd/vols
drwxr-xr-x 2 root wheel 1024 Sep 9 17:54 /d/backends/1/glusterd/vols/patchy
-rw------- 1 root wheel 174 Sep 9 17:54 /d/backends/1/glusterd/vols/patchy/node_state.info
Please correct the problem and try again.
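The leftover tree is consistent with a rebalance daemon that outlived the test: node_state.info under vols/patchy is rewritten while the session is live, so the harness's rm -rf races with it and every parent directory reports "Directory not empty". A hedged sketch of the manual cleanup the harness is asking for, on the assumption that stray gluster processes are what keep repopulating the tree:

    # Kill any gluster daemons left over from the aborted run (glusterd
    # plus the rebalance process it spawned), then retry the wipe.
    # pkill -f matches against the full command line; "|| true" keeps the
    # cleanup going when nothing is left to kill.
    pkill -f gluster || true
    rm -rf /d/backends /mnt/nfs/1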
./tests/basic/distribute/rebal-all-nodes-migrate.t ..
1..38
ok 1, LINENUM:28
ok 2, LINENUM:29
ok 3, LINENUM:30
ok 4, LINENUM:31
ok 5, LINENUM:35
ok 6, LINENUM:37
Volume Name: patchy
Type: Distribute
Volume ID: 340b2225-d37c-4a5f-9f20-08fc91a042e7
Status: Started
Snapshot Count: 0
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 127.1.1.1:/d/backends/1/dist1
Brick2: 127.1.1.1:/d/backends/1/dist2
Brick3: 127.1.1.2:/d/backends/2/dist3
Brick4: 127.1.1.2:/d/backends/2/dist4
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
ok 7, LINENUM:43
ok 8, LINENUM:45
ok 9, LINENUM:46
ok 10, LINENUM:49
ok 11, LINENUM:52
ok 12, LINENUM:58
ok 13, LINENUM:59
     Node   Rebalanced-files     size   scanned   failures   skipped        status   run time in h:m:s
---------   ----------------   ------   -------   --------   -------   -----------   -----------------
localhost                 68   0Bytes       229          0         0     completed             0:00:39
127.1.1.2                185   0Bytes       272          0         0     completed             0:01:11
volume rebalance: patchy: success
ok 14, LINENUM:63
ok 15, LINENUM:64
ok 16, LINENUM:65
ok 17, LINENUM:71
ok 18, LINENUM:73
Volume Name: patchy
Type: Distributed-Replicate
Volume ID: 865b9652-2b6c-44bf-badf-4f23193b31fc
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 127.1.1.1:/d/backends/1/drep1
Brick2: 127.1.1.2:/d/backends/2/drep1
Brick3: 127.1.1.1:/d/backends/1/drep2
Brick4: 127.1.1.2:/d/backends/2/drep2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
ok 19, LINENUM:79
ok 20, LINENUM:81
ok 21, LINENUM:82
ok 22, LINENUM:85
ok 23, LINENUM:88
ok 24, LINENUM:94
     Node   Rebalanced-files     size   scanned   failures   skipped        status   run time in h:m:s
---------   ----------------   ------   -------   --------   -------   -----------   -----------------
localhost                137   0Bytes       501          0         0     completed             0:04:46
127.1.1.2                124   0Bytes       500          0         0     completed             0:04:38
volume rebalance: patchy: success
ok 25, LINENUM:99
ok 26, LINENUM:100
ok 27, LINENUM:101
ok 28, LINENUM:106
ok 29, LINENUM:108
Volume Name: patchy
Type: Disperse
Volume ID: abdf1e1f-219d-4c4b-bf06-c7936fd3acf9
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: 127.1.1.1:/d/backends/1/ec1
Brick2: 127.1.1.2:/d/backends/1/ec2
Brick3: 127.1.1.3:/d/backends/1/ec3
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
ok 30, LINENUM:114
ok 31, LINENUM:116
ok 32, LINENUM:117
ok 33, LINENUM:120
ok 34, LINENUM:123
not ok 35 Got "1" instead of "0", LINENUM:129
FAILED COMMAND: 0 cluster_rebalance_completed
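This is the root failure of the retry: the TAP line says the check at test line 129, presumably an EXPECT_WITHIN on cluster_rebalance_completed, kept reading "1" until its timeout expired, and the status table below confirms all three nodes were still mid-migration at 0:06:01. The helper's real definition lives in the harness; the following is only an illustrative guess at its shape:

    # Illustrative guess, not the harness source: print 0 once no peer
    # reports an in-progress rebalance for the test volume, else 1.
    function cluster_rebalance_completed {
        val=1
        # $CLI_1 / $V0: harness wrapper for node 1's CLI and the volume name
        if ! $CLI_1 volume rebalance $V0 status | grep -q "in progress"; then
            val=0
        fi
        echo $val
    }

Tests 37 and 38 then fail again for the same reason as in the first pass: the volume cannot be stopped or deleted while that session is still open.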
     Node   Rebalanced-files     size   scanned   failures   skipped        status   run time in h:m:s
---------   ----------------   ------   -------   --------   -------   -----------   -----------------
localhost                 72   0Bytes       500          0         0   in progress             0:06:01
127.1.1.2                 67   0Bytes       500          0         0   in progress             0:06:01
127.1.1.3                 69   0Bytes       506          0         0   in progress             0:06:01
The estimated time for rebalance to complete will be unavailable for the first 10 minutes.
volume rebalance: patchy: success
ok 36, LINENUM:137
not ok 37, LINENUM:138
FAILED COMMAND: gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/build/install/var/log/glusterfs/rebal-all-nodes-migrate.t_cli1.log volume stop patchy
not ok 38, LINENUM:139
FAILED COMMAND: gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/build/install/var/log/glusterfs/rebal-all-nodes-migrate.t_cli1.log volume delete patchy
Dubious, test returned 1 (wstat 256, 0x100)
Failed 3/38 subtests
Test Summary Report
-------------------
./tests/basic/distribute/rebal-all-nodes-migrate.t (Wstat: 256 Tests: 38 Failed: 3)
Failed tests: 35, 37-38
Non-zero exit status: 1
Files=1, Tests=38, 3624 wallclock secs ( 0.05 usr 0.02 sys + 48.53 cusr 2058.57 csys = 2107.17 CPU)
Result: FAIL
End of test ./tests/basic/distribute/rebal-all-nodes-migrate.t
================================================================================
Run complete
================================================================================
Number of tests found: 48
Number of tests selected for run based on pattern: 48
Number of tests skipped as they were marked bad: 2
Number of tests skipped because of known_issues: 0
Number of tests that were run: 46
Tests ordered by time taken, slowest to fastest:
================================================================================
./tests/basic/distribute/rebal-all-nodes-migrate.t - 1278 second
./tests/basic/afr/lk-quorum.t - 263 second
./tests/basic/afr/split-brain-favorite-child-policy.t - 248 second
./tests/basic/afr/gfid-mismatch-resolution-with-fav-child-policy.t - 165 second
./tests/basic/afr/self-heald.t - 160 second
./tests/basic/afr/self-heal.t - 151 second
./tests/basic/afr/sparse-file-self-heal.t - 148 second
./tests/basic/afr/gfid-mismatch-resolution-with-cli.t - 118 second
./tests/basic/afr/entry-self-heal.t - 102 second
./tests/basic/afr/split-brain-heal-info.t - 90 second
./tests/basic/afr/metadata-self-heal.t - 74 second
./tests/basic/afr/inodelk.t - 65 second
./tests/basic/afr/quorum.t - 63 second
./tests/basic/afr/split-brain-healing.t - 58 second
./tests/basic/afr/arbiter.t - 53 second
./tests/basic/afr/arbiter-add-brick.t - 47 second
./tests/basic/afr/durability-off.t - 43 second
./tests/basic/afr/gfid-self-heal.t - 42 second
./tests/basic/afr/granular-esh/conservative-merge.t - 42 second
./tests/basic/afr/arbiter-mount.t - 35 second
./tests/basic/afr/split-brain-resolution.t - 32 second
./tests/basic/afr/heal-quota.t - 31 second
./tests/basic/afr/granular-esh/replace-brick.t - 28 second
./tests/basic/afr/granular-esh/granular-esh.t - 28 second
./tests/basic/afr/arbiter-remove-brick.t - 27 second
./tests/basic/afr/replace-brick-self-heal.t - 27 second
./tests/basic/afr/granular-esh/granular-indices-but-non-granular-heal.t - 26 second
./tests/basic/afr/client-side-heal.t - 25 second
./tests/basic/afr/resolve.t - 25 second
./tests/basic/afr/granular-esh/add-brick.t - 25 second
./tests/basic/afr/add-brick-self-heal.t - 24 second
./tests/basic/afr/data-self-heal.t - 22 second
./tests/basic/cdc.t - 22 second
./tests/basic/afr/stale-file-lookup.t - 22 second
./tests/basic/afr/heal-info.t - 21 second
./tests/basic/afr/read-subvol-data.t - 21 second
./tests/basic/afr/root-squash-self-heal.t - 20 second
./tests/basic/afr/read-subvol-entry.t - 19 second
./tests/basic/afr/gfid-heal.t - 18 second
./tests/basic/afr/compounded-write-txns.t - 14 second
./tests/basic/distribute/bug-1265677-use-readdirp.t - 14 second
./tests/basic/afr/arbiter-statfs.t - 14 second
./tests/basic/afr/gfid-mismatch.t - 13 second
./tests/basic/afr/arbiter-cli.t - 8 second
./tests/basic/bd.t - 1 second
./tests/basic/0symbol-check.t - 0 second
1 test(s) failed
./tests/basic/distribute/rebal-all-nodes-migrate.t
0 test(s) generated core
Result is 1
tar: Removing leading / from absolute path names in the archive
Logs archived in http://nbslave72.cloud.gluster.org/archives/logs/glusterfs-logs-20170909163616.tgz
Build step 'Execute shell' marked build as failure
More information about the Gluster-Maintainers mailing list