[Gluster-Maintainers] Build failed in Jenkins: netbsd-periodic #483

jenkins at build.gluster.org
Thu Dec 7 15:04:22 UTC 2017


See <https://build.gluster.org/job/netbsd-periodic/483/display/redirect?page=changes>

Changes:

[Nigel Babu] Revert "run-tests.sh: provide retry count option"

[Jeff Darcy] xdr: Fix build errors due to missing xdr symbol when building against

[R.Shyamsundar] metrics: provide options to dump metrics from xlators

[Jeff Darcy] nfs: Check if FQDN is authorized before unmounting clients

[Jeff Darcy] dht/crypt/tier: Fix use of booleans as integers

[Jeff Darcy] libglusterfs: specify ctx in gf_log_set_loglevel

[Jeff Darcy] gfapi: fix issue when glfs_set_logging is called concurrently

[Jeff Darcy] performance/io-threads: Reduce the number of timing calls in iot_worker

[Jeff Darcy] rpc: Fix format warnings when using IPV6_DEFAULT

[Jeff Darcy] Fixes gNFSd gf_update_latency crashes

[atin] glusterd: Free up svc->conn on volume delete

------------------------------------------
[...truncated 316.77 KB...]
ok 25, LINENUM:99
ok 26, LINENUM:100
ok 27, LINENUM:101
ok 28, LINENUM:106
ok 29, LINENUM:108
 
Volume Name: patchy
Type: Disperse
Volume ID: d0ca7b19-2de7-4d5a-9de0-a2f9d3178d51
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: 127.1.1.1:/d/backends/1/ec1
Brick2: 127.1.1.2:/d/backends/1/ec2
Brick3: 127.1.1.3:/d/backends/1/ec3
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
ok 30, LINENUM:114
ok 31, LINENUM:116
ok 32, LINENUM:117
ok 33, LINENUM:120
ok 34, LINENUM:123
ok 35, LINENUM:129
                                    Node Rebalanced-files          size       scanned      failures       skipped               status  run time in h:m:s
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               localhost               62        0Bytes           500             0             0            completed        0:03:05
                               127.1.1.2               54        0Bytes           500             0             0          in progress        0:03:05
                               127.1.1.3               56        0Bytes           507             0             0          in progress        0:03:05
The estimated time for rebalance to complete will be unavailable for the first 10 minutes.
volume rebalance: patchy: success
ok 36, LINENUM:137
not ok 37 , LINENUM:138
FAILED COMMAND: gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/build/install/var/log/glusterfs/rebal-all-nodes-migrate.t_cli1.log volume stop patchy
not ok 38 , LINENUM:139
FAILED COMMAND: gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/build/install/var/log/glusterfs/rebal-all-nodes-migrate.t_cli1.log volume delete patchy
Failed 2/38 subtests 

Test Summary Report
-------------------
./tests/basic/distribute/rebal-all-nodes-migrate.t (Wstat: 0 Tests: 38 Failed: 2)
  Failed tests:  37-38
Files=1, Tests=38, 518 wallclock secs ( 0.04 usr  0.00 sys + 20.42 cusr 21.38 csys = 41.84 CPU)
Result: FAIL
./tests/basic/distribute/rebal-all-nodes-migrate.t: bad status 1

       *********************************
       *       REGRESSION FAILED       *
       * Retrying failed tests in case *
       * we got some spurious failures *
       *********************************

volume stop: patchy: failed: Staging failed on 127.1.1.3. Error: rebalance session is in progress for the volume 'patchy'
Staging failed on 127.1.1.2. Error: rebalance session is in progress for the volume 'patchy'
volume delete: patchy: failed: Volume patchy has been started.Volume needs to be stopped before deletion.
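
The two cleanup failures (tests 37-38) follow directly from the staging errors above: the volume cannot be stopped, and therefore not deleted, while the rebalance started earlier in the run is still reported as "in progress" on 127.1.1.2 and 127.1.1.3. Below is a minimal sketch of how a local reproduction could wait the rebalance out before attempting cleanup, using only the stock gluster CLI; the wait_rebal_done helper, its 10-minute cap, and the 10-second poll interval are illustrative assumptions, not part of rebal-all-nodes-migrate.t:

    # Hypothetical helper: poll rebalance status until no peer reports
    # "in progress", then give the caller the go-ahead to stop the volume.
    wait_rebal_done() {
        local vol=$1 tries=60        # assumption: ~10 minutes at 10s per attempt
        while [ $tries -gt 0 ]; do
            gluster volume rebalance "$vol" status | grep -q "in progress" || return 0
            sleep 10
            tries=$((tries - 1))
        done
        return 1
    }

    wait_rebal_done patchy \
        && gluster --mode=script volume stop patchy \
        && gluster --mode=script volume delete patchy
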
./tests/basic/distribute/rebal-all-nodes-migrate.t .. 
1..38
ok 1, LINENUM:28
ok 2, LINENUM:29
ok 3, LINENUM:30
ok 4, LINENUM:31
ok 5, LINENUM:35
ok 6, LINENUM:37
 
Volume Name: patchy
Type: Distribute
Volume ID: b653e9c7-d62d-47b0-9a46-2e48f3bbf67e
Status: Started
Snapshot Count: 0
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 127.1.1.1:/d/backends/1/dist1
Brick2: 127.1.1.1:/d/backends/1/dist2
Brick3: 127.1.1.2:/d/backends/2/dist3
Brick4: 127.1.1.2:/d/backends/2/dist4
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
ok 7, LINENUM:43
ok 8, LINENUM:45
ok 9, LINENUM:46
ok 10, LINENUM:49
ok 11, LINENUM:52
ok 12, LINENUM:58
ok 13, LINENUM:59
                                    Node Rebalanced-files          size       scanned      failures       skipped               status  run time in h:m:s
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               localhost               68        0Bytes           229             0             0            completed        0:00:34
                               127.1.1.2               96        0Bytes           271             0             0            completed        0:00:43
volume rebalance: patchy: success
ok 14, LINENUM:63
ok 15, LINENUM:64
ok 16, LINENUM:65
ok 17, LINENUM:71
ok 18, LINENUM:73
 
Volume Name: patchy
Type: Distributed-Replicate
Volume ID: d2ce3717-b106-4ada-9380-9bf937af789a
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 127.1.1.1:/d/backends/1/drep1
Brick2: 127.1.1.2:/d/backends/2/drep1
Brick3: 127.1.1.1:/d/backends/1/drep2
Brick4: 127.1.1.2:/d/backends/2/drep2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
ok 19, LINENUM:79
ok 20, LINENUM:81
ok 21, LINENUM:82
ok 22, LINENUM:85
ok 23, LINENUM:88
ok 24, LINENUM:94
                                    Node Rebalanced-files          size       scanned      failures       skipped               status  run time in h:m:s
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               localhost              132        0Bytes           500             0             0            completed        0:03:25
                               127.1.1.2              129        0Bytes           502             0             0            completed        0:03:25
volume rebalance: patchy: success
ok 25, LINENUM:99
ok 26, LINENUM:100
ok 27, LINENUM:101
ok 28, LINENUM:106
ok 29, LINENUM:108
 
Volume Name: patchy
Type: Disperse
Volume ID: 91dfa69d-fecc-4004-8326-73e7fc824095
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: 127.1.1.1:/d/backends/1/ec1
Brick2: 127.1.1.2:/d/backends/1/ec2
Brick3: 127.1.1.3:/d/backends/1/ec3
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
ok 30, LINENUM:114
ok 31, LINENUM:116
ok 32, LINENUM:117
ok 33, LINENUM:120
ok 34, LINENUM:123
not ok 35 Got "1" instead of "0", LINENUM:129
FAILED COMMAND: 0 cluster_rebalance_completed
                                    Node Rebalanced-files          size       scanned      failures       skipped               status  run time in h:m:s
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               localhost               71        0Bytes           500             0             0            completed        0:05:38
                               127.1.1.2               67        0Bytes           502             0             0          in progress        0:06:03
                               127.1.1.3               67        0Bytes           507             0             0          in progress        0:06:03
The estimated time for rebalance to complete will be unavailable for the first 10 minutes.
volume rebalance: patchy: success
ok 36, LINENUM:137
not ok 37 , LINENUM:138
FAILED COMMAND: gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/build/install/var/log/glusterfs/rebal-all-nodes-migrate.t_cli1.log volume stop patchy
not ok 38 , LINENUM:139
FAILED COMMAND: gluster --mode=script --wignore --glusterd-sock=/d/backends/1/glusterd/gd.sock --log-file=/build/install/var/log/glusterfs/rebal-all-nodes-migrate.t_cli1.log volume delete patchy
Failed 3/38 subtests 

Test Summary Report
-------------------
./tests/basic/distribute/rebal-all-nodes-migrate.t (Wstat: 0 Tests: 38 Failed: 3)
  Failed tests:  35, 37-38
Files=1, Tests=38, 716 wallclock secs ( 0.02 usr  0.02 sys + 12153368.64 cusr 6076699.08 csys = 18230067.76 CPU)
Result: FAIL
End of test ./tests/basic/distribute/rebal-all-nodes-migrate.t
================================================================================


Run complete
================================================================================
Number of tests found:                             49
Number of tests selected for run based on pattern: 49
Number of tests skipped as they were marked bad:   2
Number of tests skipped because of known_issues:   0
Number of tests that were run:                     47

Tests ordered by time taken, slowest to fastest: 
================================================================================
./tests/basic/distribute/rebal-all-nodes-migrate.t  -  518 second
./tests/basic/afr/split-brain-favorite-child-policy.t  -  275 second
./tests/basic/afr/lk-quorum.t  -  257 second
./tests/basic/afr/self-heald.t  -  194 second
./tests/basic/afr/gfid-mismatch-resolution-with-fav-child-policy.t  -  163 second
./tests/basic/afr/self-heal.t  -  146 second
./tests/basic/afr/sparse-file-self-heal.t  -  141 second
./tests/basic/afr/gfid-mismatch-resolution-with-cli.t  -  114 second
./tests/basic/afr/entry-self-heal.t  -  93 second
./tests/basic/afr/split-brain-heal-info.t  -  89 second
./tests/basic/afr/metadata-self-heal.t  -  64 second
./tests/basic/afr/inodelk.t  -  59 second
./tests/basic/afr/split-brain-healing.t  -  53 second
./tests/basic/afr/quorum.t  -  50 second
./tests/basic/afr/arbiter-add-brick.t  -  48 second
./tests/basic/afr/arbiter.t  -  47 second
./tests/basic/afr/granular-esh/conservative-merge.t  -  45 second
./tests/basic/afr/durability-off.t  -  35 second
./tests/basic/afr/arbiter-mount.t  -  32 second
./tests/basic/afr/gfid-self-heal.t  -  31 second
./tests/basic/afr/split-brain-open.t  -  28 second
./tests/basic/afr/split-brain-resolution.t  -  27 second
./tests/basic/afr/granular-esh/granular-esh.t  -  24 second
./tests/basic/afr/granular-esh/replace-brick.t  -  23 second
./tests/basic/afr/arbiter-remove-brick.t  -  23 second
./tests/basic/afr/granular-esh/granular-indices-but-non-granular-heal.t  -  23 second
./tests/basic/afr/replace-brick-self-heal.t  -  23 second
./tests/basic/afr/read-subvol-entry.t  -  22 second
./tests/basic/afr/heal-quota.t  -  21 second
./tests/basic/afr/add-brick-self-heal.t  -  21 second
./tests/basic/afr/granular-esh/add-brick.t  -  21 second
./tests/basic/afr/resolve.t  -  20 second
./tests/basic/afr/heal-info.t  -  19 second
./tests/basic/afr/client-side-heal.t  -  18 second
./tests/basic/afr/data-self-heal.t  -  18 second
./tests/basic/afr/stale-file-lookup.t  -  17 second
./tests/basic/afr/root-squash-self-heal.t  -  16 second
./tests/basic/afr/gfid-heal.t  -  15 second
./tests/basic/afr/read-subvol-data.t  -  15 second
./tests/basic/afr/gfid-mismatch.t  -  15 second
./tests/basic/cdc.t  -  14 second
./tests/basic/afr/arbiter-statfs.t  -  12 second
./tests/basic/afr/compounded-write-txns.t  -  10 second
./tests/basic/distribute/bug-1265677-use-readdirp.t  -  10 second
./tests/basic/afr/arbiter-cli.t  -  4 second
./tests/basic/bd.t  -  1 second
./tests/basic/0symbol-check.t  -  0 second

1 test(s) failed 
./tests/basic/distribute/rebal-all-nodes-migrate.t

0 test(s) generated core 


Result is 1

tar: Removing leading / from absolute path names in the archive
Logs archived in http://nbslave72.cloud.gluster.org/archives/logs/glusterfs-logs-20171207135701.tgz
error: fatal: change is closed

fatal: one or more reviews failed; review output above
Build step 'Execute shell' marked build as failure
