[Gluster-Maintainers] Build failed in Jenkins: regression-test-burn-in #1077

From: jenkins at build.gluster.org
Date: Wed Jun 1 12:52:50 UTC 2016


See <http://build.gluster.org/job/regression-test-burn-in/1077/>

------------------------------------------
[...truncated 6814 lines...]
================================================================================


================================================================================
[12:52:23] Running tests in file ./tests/basic/netgroup_parsing.t
./tests/basic/netgroup_parsing.t .. 
1..5
ok 1, LINENUM:42
ok 2, LINENUM:43
ok 3, LINENUM:44
ok 4, LINENUM:47
ok 5, LINENUM:48
ok
All tests successful.
Files=1, Tests=5,  0 wallclock secs ( 0.02 usr  0.00 sys +  0.05 cusr  0.09 csys =  0.16 CPU)
Result: PASS
End of test ./tests/basic/netgroup_parsing.t
================================================================================
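
Each "ok N, LINENUM:x" above is one TAP result; LINENUM points at the TEST/EXPECT
assertion on that line of the .t script. A minimal sketch of such a test, assuming
the stock tests/include.rc helpers (illustrative, not the actual netgroup_parsing.t
source):

    #!/bin/bash
    # Illustrative Gluster regression test using the standard harness.
    . $(dirname $0)/../include.rc

    cleanup;

    TEST glusterd               # becomes "ok 1, LINENUM:<line of this TEST>"
    TEST pidof glusterd         # every TEST/EXPECT emits one numbered TAP result
    EXPECT 'ok' echo ok         # EXPECT compares the command's output to a value

    cleanup;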


================================================================================
[12:52:23] Running tests in file ./tests/basic/nufa.t
No volumes present
tar: Removing leading `/' from member names
./tests/basic/nufa.t .. 
1..15
ok 1, LINENUM:9
ok 2, LINENUM:10
ok 3, LINENUM:11
ok 4, LINENUM:13
ok 5, LINENUM:14
ok 6, LINENUM:16
ok 7, LINENUM:17
ok 8, LINENUM:18
ok 9, LINENUM:20
ok 10, LINENUM:22
ok 11, LINENUM:23
ok 12, LINENUM:26
ok 13, LINENUM:32
ok 14, LINENUM:34
ok 15, LINENUM:37
ok
All tests successful.
Files=1, Tests=15, 10 wallclock secs ( 0.02 usr  0.01 sys +  0.96 cusr  0.38 csys =  1.37 CPU)
Result: PASS
End of test ./tests/basic/nufa.t
================================================================================
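
Two benign lines appear in the nufa output above: "No volumes present" is simply
what "gluster volume info" prints on a clean node (most likely from the pre-test
cleanup), and the tar notice is GNU tar's standard behavior when archiving
absolute paths. For example:

    gluster volume info         # prints "No volumes present" when none exist
    tar -czf /tmp/logs.tgz /var/log/glusterfs
    # tar: Removing leading `/' from member names   <- informational only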


================================================================================
[12:52:33] Running tests in file ./tests/basic/op_errnos.t
fallocate: /d/backends/patchy_snap_vhd: fallocate failed: Operation not supported
losetup: /d/backends/patchy_snap_vhd: warning: file smaller than 512 bytes, the loop device maybe be useless or invisible for system tools.
  Device /d/backends/patchy_snap_loop not found (or ignored by filtering).
  Device /d/backends/patchy_snap_loop not found (or ignored by filtering).
  Unable to add physical volume '/d/backends/patchy_snap_loop' to volume group 'patchy_snap_vg_1'.
  Volume group "patchy_snap_vg_1" not found
  Cannot process volume group patchy_snap_vg_1
  Volume group "patchy_snap_vg_1" not found
  Cannot process volume group patchy_snap_vg_1
/dev/patchy_snap_vg_1/brick_lvm: No such file or directory
Usage: mkfs.xfs
/* blocksize */		[-b log=n|size=num]
/* data subvol */	[-d agcount=n,agsize=n,file,name=xxx,size=num,
			    (sunit=value,swidth=value|su=num,sw=num),
			    sectlog=n|sectsize=num
/* inode size */	[-i log=n|perblock=n|size=num,maxpct=n,attr=0|1|2,
			    projid32bit=0|1]
/* log subvol */	[-l agnum=n,internal,size=num,logdev=xxx,version=n
			    sunit=value|su=num,sectlog=n|sectsize=num,
			    lazy-count=0|1]
/* label */		[-L label (maximum 12 characters)]
/* naming */		[-n log=n|size=num,version=2|ci]
/* prototype file */	[-p fname]
/* quiet */		[-q]
/* realtime subvol */	[-r extsize=num,size=num,rtdev=xxx]
/* sectorsize */	[-s log=n|size=num]
/* version */		[-V]
			devicename
<devicename> is required unless -d name=xxx is given.
<num> is xxx (bytes), xxxs (sectors), xxxb (fs blocks), xxxk (xxx KiB),
      xxxm (xxx MiB), xxxg (xxx GiB), xxxt (xxx TiB) or xxxp (xxx PiB).
<value> is xxx (512 byte blocks).
mount: special device /dev/patchy_snap_vg_1/brick_lvm does not exist
tar: Removing leading `/' from member names
./tests/basic/op_errnos.t .. 
1..21
ok 1, LINENUM:12
ok 2, LINENUM:13
ok 3, LINENUM:14
ok 4, LINENUM:16
ok 5, LINENUM:18
ok 6, LINENUM:19
ok 7, LINENUM:20
ok 8, LINENUM:21
ok 9, LINENUM:23
ok 10, LINENUM:24
ok 11, LINENUM:25
not ok 12 Got "  30807" instead of "30809", LINENUM:26
FAILED COMMAND: 30809 get-op_errno-xml snapshot restore snap1
ok 13, LINENUM:27
ok 14, LINENUM:28
ok 15, LINENUM:29
ok 16, LINENUM:30
not ok 17 Got "  30815" instead of "30812", LINENUM:31
FAILED COMMAND: 30812 get-op_errno-xml snapshot create snap1 patchy no-timestamp
ok 18, LINENUM:32
ok 19, LINENUM:34
ok 20, LINENUM:35
ok 21, LINENUM:36
Failed 2/21 subtests 

Test Summary Report
-------------------
./tests/basic/op_errnos.t (Wstat: 0 Tests: 21 Failed: 2)
  Failed tests:  12, 17
Files=1, Tests=21, 16 wallclock secs ( 0.02 usr  0.00 sys +  4.28 cusr  0.86 csys =  5.16 CPU)
Result: FAIL
End of test ./tests/basic/op_errnos.t
================================================================================
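
The two failed subtests follow from the broken snapshot environment at the top of
this test: fallocate is unsupported on the backend filesystem, so the VHD backing
file stays empty, the loop-backed PV/VG/LV chain never comes up, mkfs.xfs is
invoked without a device (hence the usage dump), and the brick mount fails. A
rough reconstruction of what the harness attempts (modeled on tests/snapshot.rc;
exact names and sizes here are assumptions):

    VHD=/d/backends/patchy_snap_vhd
    VG=patchy_snap_vg_1

    fallocate -l 1G $VHD               # failed: backend FS lacks fallocate
    LODEV=$(losetup -f --show $VHD)    # loop device over the (empty) file
    pvcreate $LODEV                    # "Device ... not found" errors follow
    vgcreate $VG $LODEV                #   from the zero-length backing file
    lvcreate -L 600M -n brick_lvm $VG
    mkfs.xfs -f /dev/$VG/brick_lvm     # no LV -> mkfs.xfs prints its usage text
    mount /dev/$VG/brick_lvm /d/backends/patchy_snap_mnt

With no usable snapshot brick, the snapshot restore/create commands most likely
fail on a different error path, so the op_errno comparison sees 30807/30815 where
the test expects 30809/30812 -- an environment problem rather than a code
regression.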


Run complete
================================================================================
Number of tests found:                             81
Number of tests selected for run based on pattern: 81
Number of tests skipped as they were marked bad:   1
Number of tests skipped because of known_issues:   0
Number of tests that were run:                     80

1 test(s) failed 
./tests/basic/op_errnos.t

0 test(s) generated core 


Tests ordered by time taken, slowest to fastest: 
================================================================================
./tests/basic/afr/split-brain-favorite-child-policy.t  -  564 second
./tests/basic/ec/ec-12-4.t  -  321 second
./tests/basic/ec/ec-7-3.t  -  191 second
./tests/basic/ec/ec-6-2.t  -  165 second
./tests/basic/afr/entry-self-heal.t  -  159 second
./tests/basic/ec/ec-5-2.t  -  140 second
./tests/basic/ec/ec-5-1.t  -  139 second
./tests/basic/glusterd/heald.t  -  135 second
./tests/basic/ec/ec-4-1.t  -  113 second
./tests/basic/afr/self-heal.t  -  112 second
./tests/basic/ec/ec-root-heal.t  -  108 second
./tests/basic/afr/granular-esh/granular-esh.t  -  97 second
./tests/basic/afr/granular-esh/add-brick.t  -  93 second
./tests/basic/afr/add-brick-self-heal.t  -  93 second
./tests/basic/ec/ec-new-entry.t  -  89 second
./tests/basic/ec/ec-3-1.t  -  89 second
./tests/basic/afr/split-brain-heal-info.t  -  73 second
./tests/basic/afr/split-brain-healing.t  -  70 second
./tests/basic/afr/self-heald.t  -  70 second
./tests/basic/ec/self-heal.t  -  64 second
./tests/basic/afr/sparse-file-self-heal.t  -  63 second
./tests/basic/afr/metadata-self-heal.t  -  57 second
./tests/basic/ec/ec-background-heals.t  -  48 second
./tests/basic/mount-nfs-auth.t  -  37 second
./tests/basic/ec/ec-anonymous-fd.t  -  36 second
./tests/basic/ec/ec-notify.t  -  32 second
./tests/basic/afr/arbiter.t  -  30 second
./tests/basic/ec/ec.t  -  29 second
./tests/basic/afr/data-self-heal.t  -  28 second
./tests/basic/jbr/jbr.t  -  27 second
./tests/basic/afr/quorum.t  -  23 second
./tests/basic/mgmt_v3-locks.t  -  22 second
./tests/basic/afr/durability-off.t  -  22 second
./tests/basic/afr/arbiter-add-brick.t  -  20 second
./tests/basic/ec/quota.t  -  19 second
./tests/basic/0symbol-check.t  -  19 second
./tests/basic/ec/ec-readdir.t  -  18 second
./tests/basic/afr/gfid-self-heal.t  -  18 second
./tests/basic/op_errnos.t  -  17 second
./tests/basic/glusterd/volfile_server_switch.t  -  17 second
./tests/basic/geo-replication/marker-xattrs.t  -  17 second
./tests/basic/afr/split-brain-resolution.t  -  14 second
./tests/basic/afr/replace-brick-self-heal.t  -  14 second
./tests/basic/afr/heal-quota.t  -  14 second
./tests/basic/afr/granular-esh/replace-brick.t  -  14 second
./tests/basic/afr/resolve.t  -  13 second
./tests/basic/ec/statedump.t  -  12 second
./tests/basic/bd.t  -  12 second
./tests/basic/afr/client-side-heal.t  -  12 second
./tests/basic/glusterd/arbiter-volume-probe.t  -  11 second
./tests/basic/afr/stale-file-lookup.t  -  11 second
./tests/basic/afr/root-squash-self-heal.t  -  11 second
./tests/basic/nufa.t  -  10 second
./tests/basic/glusterd/disperse-create.t  -  10 second
./tests/basic/cdc.t  -  10 second
./tests/basic/afr/read-subvol-data.t  -  10 second
./tests/basic/mount.t  -  8 second
./tests/basic/inode-quota-enforcing.t  -  8 second
./tests/basic/glusterd/arbiter-volume.t  -  8 second
./tests/basic/ec/ec-read-policy.t  -  8 second
./tests/basic/afr/heal-info.t  -  8 second
./tests/basic/afr/arbiter-mount.t  -  8 second
./tests/basic/meta.t  -  7 second
./tests/basic/fop-sampling.t  -  7 second
./tests/basic/afr/read-subvol-entry.t  -  7 second
./tests/basic/afr/gfid-heal.t  -  7 second
./tests/basic/ec/dht-rename.t  -  6 second
./tests/basic/distribute/bug-1265677-use-readdirp.t  -  6 second
./tests/basic/afr/gfid-mismatch.t  -  6 second
./tests/basic/afr/arbiter-statfs.t  -  6 second
./tests/basic/afr/arbiter-remove-brick.t  -  6 second
./tests/basic/fops-sanity.t  -  5 second
./tests/basic/ec/nfs.t  -  5 second
./tests/basic/ec/ec-internal-xattrs.t  -  5 second
./tests/basic/distribute/throttle-rebal.t  -  5 second
./tests/basic/jbr/jbr-volgen.t  -  4 second
./tests/basic/gfid-access.t  -  4 second
./tests/basic/exports_parsing.t  -  1 second
./tests/basic/netgroup_parsing.t  -  0 second
./tests/basic/first-test.t  -  0 second

Result is 1

+ RET=1
++ wc -l
++ ls -l /glusterfsd-21963.core /glusterfsd-21982.core
+ cur_count=2
++ ls /glusterfsd-21963.core /glusterfsd-21982.core
+ cur_cores='/glusterfsd-21963.core
/glusterfsd-21982.core'
+ '[' 2 '!=' 2 ']'
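
The '[' 2 '!=' 2 ']' comparison above is the wrapper checking whether the run
produced new core files: both glusterfsd cores predate this build, which is why
the summary reports "0 test(s) generated core" even though two .core files sit on
disk. Roughly (a reconstruction, not the exact Jenkins shell step):

    old_count=$(ls /*.core 2>/dev/null | wc -l)   # counted before the run
    ./run-tests.sh; RET=$?
    cur_count=$(ls /*.core 2>/dev/null | wc -l)   # counted after the run
    if [ "$cur_count" != "$old_count" ]; then
        RET=1    # fresh cores fail the build even if all tests passed
    fi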
+ '[' 1 -ne 0 ']'
+ filename=logs/glusterfs-logs-20160601:11:49:31.tgz
+ tar -czf /archives/logs/glusterfs-logs-20160601:11:49:31.tgz /var/log/glusterfs /var/log/messages /var/log/messages-20160320.gz /var/log/messages-20160327.gz /var/log/messages-20160403.gz /var/log/messages-20160410.gz /var/log/messages-20160417.gz /var/log/messages-20160508 /var/log/messages-20160515 /var/log/messages-20160522 /var/log/messages-20160529
tar: Removing leading `/' from member names
tar (child): /archives/logs/glusterfs-logs-20160601\:11\:49\:31.tgz: Cannot open: No such file or directory
tar (child): Error is not recoverable: exiting now
+ echo Logs archived in http://slave29.cloud.gluster.org/logs/glusterfs-logs-20160601:11:49:31.tgz
Logs archived in http://slave29.cloud.gluster.org/logs/glusterfs-logs-20160601:11:49:31.tgz
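
Note that the archive step itself failed: tar could not create the .tgz ("Cannot
open: No such file or directory"), most likely because /archives/logs does not
exist on slave29, so the "Logs archived in ..." URL above will not resolve. A
guard along these lines (illustrative) would make the step robust:

    ARCHIVE_DIR=/archives/logs                 # assumed slave-side path
    mkdir -p "$ARCHIVE_DIR"                    # ensure the directory exists
    # avoid ':' in archive names; some tools parse "host:path" around colons
    fname="glusterfs-logs-$(date +%Y%m%d-%H%M%S).tgz"
    tar -czf "$ARCHIVE_DIR/$fname" /var/log/glusterfs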
+ case $(uname -s) in
++ uname -s
+ /sbin/sysctl -w kernel.core_pattern=/%e-%p.core
kernel.core_pattern = /%e-%p.core
+ exit 1
+ RET=1
+ '[' 1 = 0 ']'
+ V=-1
+ VERDICT=FAILED
+ '[' 0 -eq 1 ']'
+ exit 1
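
The /sbin/sysctl call near the end re-arms the kernel's core naming so that any
future crash lands in / as <executable>-<pid>.core (%e is the executable name,
%p the pid) -- the same pattern that produced the glusterfsd-21963.core and
glusterfsd-21982.core files counted above.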
Build step 'Execute shell' marked build as failure

