[Gluster-Maintainers] Build failed in Jenkins: regression-test-burn-in #2142

jenkins at build.gluster.org
Sun Nov 27 02:40:24 UTC 2016


See <http://build.gluster.org/job/regression-test-burn-in/2142/>

------------------------------------------
[...truncated 9704 lines...]
ok 167, LINENUM:346
ok 168, LINENUM:362
ok 169, LINENUM:364
ok 170, LINENUM:366
ok 171, LINENUM:368
ok 172, LINENUM:370
ok 173, LINENUM:372
ok 174, LINENUM:374
ok 175, LINENUM:376
ok 176, LINENUM:378
ok 177, LINENUM:380
ok 178, LINENUM:382
ok
All tests successful.
Files=1, Tests=178, 61 wallclock secs ( 0.06 usr  0.01 sys + 11.35 cusr 12.38 csys = 23.80 CPU)
Result: PASS
End of test ./tests/basic/uss.t
================================================================================


================================================================================
[02:39:08] Running tests in file ./tests/basic/volume-snapshot-clone.t
  allocation/use_blkid_wiping=1 configuration setting is set while LVM is not compiled with blkid wiping support.
  Falling back to native LVM signature detection.
  allocation/use_blkid_wiping=1 configuration setting is set while LVM is not compiled with blkid wiping support.
  Falling back to native LVM signature detection.
  allocation/use_blkid_wiping=1 configuration setting is set while LVM is not compiled with blkid wiping support.
  Falling back to native LVM signature detection.
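These three identical warnings come from the slave's LVM configuration rather than from Gluster: /etc/lvm/lvm.conf requests blkid-based signature wiping, but the installed LVM binary was built without blkid support, so LVM falls back to its native detection. They are noise as far as the snapshot tests are concerned; a node could silence them by matching the config to the binary's capabilities (a sketch of the relevant lvm.conf stanza, not taken from this slave):

    # /etc/lvm/lvm.conf
    allocation {
        use_blkid_wiping = 0    # ask for native signature detection explicitly
    }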
snapshot delete: failed: Commit failed on 127.1.1.3. Snapshot patchy2_snap might not be in an usable state.
volume delete: patchy2: failed: Staging failed on 127.1.1.3. Error: Cannot delete Volume patchy2 ,as it has 1 snapshots. To delete the volume, first delete all the snapshots under it.
./tests/basic/volume-snapshot-clone.t .. 
1..50
ok 1, LINENUM:40
ok 2, LINENUM:42
ok 3, LINENUM:43
ok 4, LINENUM:45
ok 5, LINENUM:46
ok 6, LINENUM:47
volume create: patchy2: success: please start the volume to access data
volume create: patchy: success: please start the volume to access data
ok 7, LINENUM:50
ok 8, LINENUM:51
volume start: patchy: success
volume start: patchy2: success
ok 9, LINENUM:54
ok 10, LINENUM:55
ok 11, LINENUM:57
snapshot create: success: Snap patchy_snap created successfully
snapshot create: success: Snap patchy2_snap created successfully
ok 12, LINENUM:63
ok 13, LINENUM:64
ok 14, LINENUM:67
ok 15, LINENUM:68
ok 16, LINENUM:70
ok 17, LINENUM:71
ok 18, LINENUM:73
ok 19, LINENUM:74
ok 20, LINENUM:76
ok 21, LINENUM:77
ok 22, LINENUM:80
ok 23, LINENUM:81
ok 24, LINENUM:83
ok 25, LINENUM:84
ok 26, LINENUM:86
ok 27, LINENUM:87
ok 28, LINENUM:89
ok 29, LINENUM:90
ok 30, LINENUM:92
ok 31, LINENUM:93
ok 32, LINENUM:95
ok 33, LINENUM:97
ok 34, LINENUM:98
ok 35, LINENUM:100
ok 36, LINENUM:101
ok 37, LINENUM:103
ok 38, LINENUM:104
ok 39, LINENUM:106
ok 40, LINENUM:107
ok 41, LINENUM:109
ok 42, LINENUM:110
volume stop: patchy: success
volume stop: patchy2: success
ok 43, LINENUM:114
ok 44, LINENUM:115
ok 45, LINENUM:117
not ok 46 , LINENUM:118
FAILED COMMAND: delete_snapshot patchy2_snap
ok 47, LINENUM:120
ok 48, LINENUM:121
volume delete: patchy: success
ok 49, LINENUM:124
not ok 50 Got "Y" instead of "N", LINENUM:125
FAILED COMMAND: N volume_exists patchy2
Failed 2/50 subtests 

Test Summary Report
-------------------
./tests/basic/volume-snapshot-clone.t (Wstat: 0 Tests: 50 Failed: 2)
  Failed tests:  46, 50
Files=1, Tests=50, 73 wallclock secs ( 0.03 usr  0.01 sys + 12.66 cusr 10.35 csys = 23.05 CPU)
Result: FAIL
End of test ./tests/basic/volume-snapshot-clone.t
================================================================================
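The two failed subtests are one chain of events: subtest 46's delete_snapshot patchy2_snap hit a commit failure on peer 127.1.1.3 (the "snapshot delete: failed" line above), which left patchy2_snap behind; the volume delete then failed staging because a volume with snapshots cannot be deleted, so subtest 50's check that patchy2 is gone saw "Y" instead of the expected "N". Manual cleanup on the slave would follow the order the staging error asks for (a sketch using the standard gluster CLI; that the retried delete succeeds is an assumption):

    # Snapshots must be removed before the volume that owns them.
    gluster --mode=script snapshot delete patchy2_snap   # retry the failed commit
    gluster --mode=script volume delete patchy2          # staging can now pass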


Run complete
================================================================================
Number of tests found:                             127
Number of tests selected for run based on pattern: 127
Number of tests skipped as they were marked bad:   6
Number of tests skipped because of known_issues:   1
Number of tests that were run:                     120

1 test(s) failed 
./tests/basic/volume-snapshot-clone.t

0 test(s) generated core 


Tests ordered by time taken, slowest to fastest: 
================================================================================
./tests/basic/afr/split-brain-favorite-child-policy.t  -  597 second
./tests/basic/ec/ec-background-heals.t  -  547 second
./tests/basic/afr/self-heald.t  -  407 second
./tests/basic/ec/ec-12-4.t  -  319 second
./tests/basic/afr/entry-self-heal.t  -  200 second
./tests/basic/ec/ec-7-3.t  -  195 second
./tests/basic/tier/tier-heald.t  -  171 second
./tests/basic/ec/ec-6-2.t  -  167 second
./tests/basic/glusterd/heald.t  -  142 second
./tests/basic/ec/ec-5-1.t  -  142 second
./tests/basic/ec/ec-5-2.t  -  141 second
./tests/basic/tier/legacy-many.t  -  137 second
./tests/basic/afr/self-heal.t  -  131 second
./tests/basic/tier/tier.t  -  127 second
./tests/basic/ec/ec-4-1.t  -  117 second
./tests/basic/afr/granular-esh/conservative-merge.t  -  113 second
./tests/basic/afr/granular-esh/granular-esh.t  -  101 second
./tests/basic/afr/granular-esh/granular-indices-but-non-granular-heal.t  -  100 second
./tests/basic/afr/granular-esh/add-brick.t  -  100 second
./tests/basic/afr/add-brick-self-heal.t  -  98 second
./tests/basic/ec/ec-root-heal.t  -  97 second
./tests/basic/ec/ec-new-entry.t  -  97 second
./tests/basic/ec/ec-3-1.t  -  93 second
./tests/basic/afr/split-brain-healing.t  -  82 second
./tests/basic/afr/split-brain-heal-info.t  -  81 second
./tests/basic/quota.t  -  77 second
./tests/basic/volume-snapshot-clone.t  -  73 second
./tests/basic/afr/metadata-self-heal.t  -  73 second
./tests/basic/tier/new-tier-cmds.t  -  70 second
./tests/basic/tier/tierd_check.t  -  69 second
./tests/basic/ec/self-heal.t  -  67 second
./tests/basic/afr/sparse-file-self-heal.t  -  66 second
./tests/basic/uss.t  -  61 second
./tests/basic/tier/frequency-counters.t  -  61 second
./tests/basic/tier/fops-during-migration-pause.t  -  52 second
./tests/basic/ec/ec-cpu-extensions.t  -  47 second
./tests/basic/tier/unlink-during-migration.t  -  45 second
./tests/basic/mount-nfs-auth.t  -  44 second
./tests/basic/ec/ec-notify.t  -  44 second
./tests/basic/jbr/jbr.t  -  42 second
./tests/basic/tier/locked_file_migration.t  -  41 second
./tests/basic/ec/ec-readdir.t  -  41 second
./tests/basic/mgmt_v3-locks.t  -  39 second
./tests/basic/ec/ec-anonymous-fd.t  -  39 second
./tests/basic/afr/arbiter.t  -  39 second
./tests/basic/afr/data-self-heal.t  -  38 second
./tests/basic/quota-ancestry-building.t  -  36 second
./tests/basic/ec/ec.t  -  36 second
./tests/basic/afr/quorum.t  -  36 second
./tests/basic/afr/inodelk.t  -  28 second
./tests/basic/gfapi/gfapi-ssl-test.t  -  27 second
./tests/basic/geo-replication/marker-xattrs.t  -  27 second
./tests/basic/afr/durability-off.t  -  27 second
./tests/basic/tier/file_with_spaces.t  -  26 second
./tests/basic/afr/arbiter-add-brick.t  -  26 second
./tests/basic/op_errnos.t  -  25 second
./tests/basic/glusterd/volfile_server_switch.t  -  25 second
./tests/basic/ec/quota.t  -  25 second
./tests/basic/afr/heal-quota.t  -  25 second
./tests/basic/tier/readdir-during-migration.t  -  24 second
./tests/basic/afr/gfid-self-heal.t  -  23 second
./tests/basic/0symbol-check.t  -  23 second
./tests/basic/afr/replace-brick-self-heal.t  -  20 second
./tests/basic/glusterd/disperse-create.t  -  19 second
./tests/basic/afr/granular-esh/replace-brick.t  -  19 second
./tests/basic/glusterd/arbiter-volume-probe.t  -  18 second
./tests/basic/ec/statedump.t  -  18 second
./tests/basic/afr/split-brain-resolution.t  -  18 second
./tests/basic/afr/resolve.t  -  18 second
./tests/basic/cdc.t  -  17 second
./tests/basic/bd.t  -  17 second
./tests/basic/afr/client-side-heal.t  -  17 second
./tests/basic/afr/stale-file-lookup.t  -  16 second
./tests/basic/afr/root-squash-self-heal.t  -  16 second
./tests/basic/tier/ctr-rename-overwrite.t  -  15 second
./tests/basic/rpc-coverage.t  -  15 second
./tests/basic/quota-anon-fd-nfs.t  -  15 second
./tests/basic/nufa.t  -  15 second
./tests/basic/inode-quota-enforcing.t  -  15 second
./tests/basic/quota-nfs.t  -  14 second
./tests/basic/pump.t  -  14 second
./tests/basic/glusterd/arbiter-volume.t  -  14 second
./tests/basic/afr/read-subvol-data.t  -  14 second
./tests/basic/mount.t  -  13 second
./tests/basic/ec/ec-read-policy.t  -  13 second
./tests/basic/stats-dump.t  -  12 second
./tests/basic/fop-sampling.t  -  12 second
./tests/basic/afr/heal-info.t  -  12 second
./tests/basic/afr/arbiter-mount.t  -  12 second
./tests/basic/meta.t  -  11 second
./tests/basic/gfapi/bug1291259.t  -  11 second
./tests/basic/distribute/bug-1265677-use-readdirp.t  -  11 second
./tests/basic/afr/read-subvol-entry.t  -  11 second
./tests/basic/afr/arbiter-remove-brick.t  -  11 second
./tests/basic/pgfid-feat.t  -  10 second
./tests/basic/gfapi/anonymous_fd.t  -  10 second
./tests/basic/afr/gfid-heal.t  -  10 second
./tests/basic/afr/compounded-write-txns.t  -  10 second
./tests/basic/afr/arbiter-statfs.t  -  10 second
./tests/basic/gfapi/upcall-cache-invalidate.t  -  9 second
./tests/basic/gfapi/gfapi-dup.t  -  9 second
./tests/basic/gfapi/bug-1241104.t  -  9 second
./tests/basic/ec/ec-internal-xattrs.t  -  9 second
./tests/basic/ec/dht-rename.t  -  9 second
./tests/basic/distribute/throttle-rebal.t  -  9 second
./tests/basic/afr/gfid-mismatch.t  -  9 second
./tests/basic/quota-rename.t  -  8 second
./tests/basic/jbr/jbr-volgen.t  -  8 second
./tests/basic/gfid-access.t  -  8 second
./tests/basic/gfapi/libgfapi-fini-hang.t  -  8 second
./tests/basic/gfapi/gfapi-trunc.t  -  8 second
./tests/basic/gfapi/gfapi-async-calls-test.t  -  8 second
./tests/basic/fops-sanity.t  -  8 second
./tests/basic/ec/nfs.t  -  8 second
./tests/basic/afr/arbiter-cli.t  -  7 second
./tests/basic/rpm.t  -  2 second
./tests/basic/posixonly.t  -  1 second
./tests/basic/netgroup_parsing.t  -  1 second
./tests/basic/exports_parsing.t  -  1 second
./tests/basic/first-test.t  -  0 second

Result is 1

tar: Removing leading `/' from member names
tar: /var/log/messages: file changed as we read it
Logs archived in http://slave22.cloud.gluster.org/logs/glusterfs-logs-regression-test-burn-in-2142.tgz
kernel.core_pattern = /%e-%p.core
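The harness resets kernel.core_pattern so that any core dump lands in / as <executable>-<pid>.core, which is presumably how the "0 test(s) generated core" count above is taken. The line corresponds to a standard sysctl call (%e and %p are kernel-documented placeholders for the executable name and PID):

    sysctl -w kernel.core_pattern=/%e-%p.core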
Build step 'Execute shell' marked build as failure
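"Result is 1" is the exit status of the test run, and the non-zero code is what makes the 'Execute shell' step mark the build as failed; Jenkins runs such steps under sh -xe by default, so the last command's status decides the outcome (the entry-point name below is hypothetical):

    #!/bin/sh -xe
    ./run-tests.sh        # hypothetical harness entry point; exited 1 in this run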

