[Gluster-Maintainers] Build failed in Jenkins: regression-test-burn-in #1782

jenkins at build.gluster.org
Wed Sep 28 01:11:38 UTC 2016


See <http://build.gluster.org/job/regression-test-burn-in/1782/>

------------------------------------------
[...truncated 7870 lines...]
mount.nfs: access denied by server while mounting slave28.cloud.gluster.org:/patchy
mount.nfs: access denied by server while mounting slave28.cloud.gluster.org:/patchy/L1
mount.nfs: access denied by server while mounting slave28.cloud.gluster.org:/patchy/L1
mount.nfs: access denied by server while mounting slave28.cloud.gluster.org:/patchy/L1
mount.nfs: access denied by server while mounting slave28.cloud.gluster.org:/patchy/L1
mount.nfs: access denied by server while mounting slave28.cloud.gluster.org:/patchy/L1
mount.nfs: access denied by server while mounting slave28.cloud.gluster.org:/patchy/L1
mount.nfs: access denied by server while mounting slave28.cloud.gluster.org:/patchy/L1
mount.nfs: access denied by server while mounting slave28.cloud.gluster.org:/patchy/L1
mount.nfs: access denied by server while mounting slave28.cloud.gluster.org:/patchy/L1
mount.nfs: access denied by server while mounting slave28.cloud.gluster.org:/patchy/L1
mount.nfs: access denied by server while mounting slave28.cloud.gluster.org:/patchy/L1
mount.nfs: access denied by server while mounting slave28.cloud.gluster.org:/patchy/L1
mount.nfs: access denied by server while mounting slave28.cloud.gluster.org:/patchy/L1
mount.nfs: access denied by server while mounting slave28.cloud.gluster.org:/patchy/L1
mount.nfs: access denied by server while mounting slave28.cloud.gluster.org:/patchy/L1
umount2: Invalid argument
umount: /mnt/nfs/0: not mounted
umount2: Invalid argument
umount: /mnt/nfs/0: not mounted
umount2: Invalid argument
umount: /mnt/nfs/0: not mounted
umount2: Invalid argument
umount: /mnt/nfs/0: not mounted
umount2: Invalid argument
umount: /mnt/nfs/0: not mounted
./tests/basic/mount-nfs-auth.t .. 
1..89
ok 1, LINENUM:14
ok 2, LINENUM:15
ok 3, LINENUM:16
volume create: patchy: success: please start the volume to access data
ok 4, LINENUM:150
ok 5, LINENUM:151
ok 6, LINENUM:159
ok 7, LINENUM:164
ok 8, LINENUM:165
ok 9, LINENUM:168
ok 10, LINENUM:169
ok 11, LINENUM:173
ok 12, LINENUM:174
ok 13, LINENUM:175
volume stop: patchy: success
ok 14, LINENUM:183
volume start: patchy: success
ok 15, LINENUM:185
ok 16, LINENUM:188
ok 17, LINENUM:191
ok 18, LINENUM:192
ok 19, LINENUM:195
ok 20, LINENUM:198
ok 21, LINENUM:201
ok 22, LINENUM:204
ok 23, LINENUM:205
ok 24, LINENUM:207
ok 25, LINENUM:208
ok 26, LINENUM:211
ok 27, LINENUM:212
ok 28, LINENUM:217
ok 29, LINENUM:218
ok 30, LINENUM:219
ok 31, LINENUM:220
ok 32, LINENUM:223
ok 33, LINENUM:224
ok 34, LINENUM:226
ok 35, LINENUM:227
ok 36, LINENUM:230
ok 37, LINENUM:231
ok 38, LINENUM:233
ok 39, LINENUM:234
ok 40, LINENUM:238
ok 41, LINENUM:239
ok 42, LINENUM:241
ok 43, LINENUM:242
ok 44, LINENUM:243
ok 45, LINENUM:246
ok 46, LINENUM:247
ok 47, LINENUM:248
ok 48, LINENUM:250
ok 49, LINENUM:251
volume start: patchy: success
ok 50, LINENUM:255
ok 51, LINENUM:257
ok 52, LINENUM:258
ok 53, LINENUM:259
ok 54, LINENUM:260
ok 55, LINENUM:261
ok 56, LINENUM:263
ok 57, LINENUM:264
ok 58, LINENUM:265
ok 59, LINENUM:267
not ok 60 Got "N" instead of "Y", LINENUM:268
FAILED COMMAND: Y check_mount_success patchy/L1
not ok 61 Got "N" instead of "Y", LINENUM:269
FAILED COMMAND: Y umount_nfs /mnt/nfs/0
ok 62, LINENUM:272
volume start: patchy: success
ok 63, LINENUM:276
ok 64, LINENUM:278
ok 65, LINENUM:279
ok 66, LINENUM:280
ok 67, LINENUM:284
ok 68, LINENUM:285
ok 69, LINENUM:286
ok 70, LINENUM:288
ok 71, LINENUM:290
ok 72, LINENUM:291
ok 73, LINENUM:292
ok 74, LINENUM:294
ok 75, LINENUM:295
ok 76, LINENUM:296
volume stop: patchy: success
ok 77, LINENUM:301
volume start: patchy: success
ok 78, LINENUM:303
ok 79, LINENUM:305
ok 80, LINENUM:306
ok 81, LINENUM:308
ok 82, LINENUM:309
volume stop: patchy: success
ok 83, LINENUM:313
volume start: patchy: success
ok 84, LINENUM:315
ok 85, LINENUM:318
ok 86, LINENUM:321
ok 87, LINENUM:324
ok 88, LINENUM:325
ok 89, LINENUM:326
Failed 2/89 subtests 

Test Summary Report
-------------------
./tests/basic/mount-nfs-auth.t (Wstat: 0 Tests: 89 Failed: 2)
  Failed tests:  60-61
Files=1, Tests=89, 61 wallclock secs ( 0.05 usr  0.01 sys +  3.70 cusr  2.17 csys =  5.93 CPU)
Result: FAIL
End of test ./tests/basic/mount-nfs-auth.t
================================================================================
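
For context on the two failures above: each "not ok" line reports an EXPECT-style check that ran a helper (check_mount_success, umount_nfs, both named in the FAILED COMMAND output) and compared its printed result against the expected token "Y". The following is only a minimal shell sketch of that pattern, modeled on the TAP output shown above and not on the actual test framework; the helper behavior (print "Y" on success, "N" on failure) is an assumption.

    #!/bin/bash
    # Sketch of an EXPECT-style TAP check, modeled on the "not ok 60/61" lines above.
    # The real helpers (check_mount_success, umount_nfs) live in mount-nfs-auth.t;
    # here they are assumed to print "Y" on success and "N" on failure.

    TEST_NUM=0

    EXPECT () {
        local expected=$1; shift
        local actual
        actual=$("$@")                     # run the helper and capture its output
        TEST_NUM=$((TEST_NUM + 1))
        if [ "$actual" = "$expected" ]; then
            echo "ok $TEST_NUM"
        else
            echo "not ok $TEST_NUM Got \"$actual\" instead of \"$expected\""
            echo "FAILED COMMAND: $expected $*"
        fi
    }

    # Hypothetical usage mirroring tests 60-61:
    #   EXPECT "Y" check_mount_success patchy/L1
    #   EXPECT "Y" umount_nfs /mnt/nfs/0

Read this way, Got "N" instead of "Y" means the NFS mount of patchy/L1 never succeeded, which is consistent with the repeated "mount.nfs: access denied by server" messages earlier in the log.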


Run complete
================================================================================
Number of tests found:                             88
Number of tests selected for run based on pattern: 88
Number of tests skipped as they were marked bad:   2
Number of tests skipped because of known_issues:   0
Number of tests that were run:                     86

1 test(s) failed 
./tests/basic/mount-nfs-auth.t

0 test(s) generated core 


Tests ordered by time taken, slowest to fastest: 
================================================================================
./tests/basic/afr/split-brain-favorite-child-policy.t  -  593 second
./tests/basic/afr/self-heald.t  -  397 second
./tests/basic/ec/ec-12-4.t  -  334 second
./tests/basic/ec/ec-background-heals.t  -  306 second
./tests/basic/ec/ec-7-3.t  -  201 second
./tests/basic/afr/entry-self-heal.t  -  186 second
./tests/basic/ec/ec-6-2.t  -  173 second
./tests/basic/ec/ec-5-2.t  -  148 second
./tests/basic/ec/ec-5-1.t  -  146 second
./tests/basic/glusterd/heald.t  -  139 second
./tests/basic/afr/self-heal.t  -  131 second
./tests/basic/ec/ec-4-1.t  -  120 second
./tests/basic/afr/granular-esh/conservative-merge.t  -  111 second
./tests/basic/afr/granular-esh/granular-esh.t  -  97 second
./tests/basic/afr/add-brick-self-heal.t  -  96 second
./tests/basic/afr/granular-esh/add-brick.t  -  95 second
./tests/basic/ec/ec-root-heal.t  -  94 second
./tests/basic/ec/ec-3-1.t  -  94 second
./tests/basic/ec/ec-new-entry.t  -  93 second
./tests/basic/afr/split-brain-healing.t  -  77 second
./tests/basic/afr/metadata-self-heal.t  -  75 second
./tests/basic/afr/split-brain-heal-info.t  -  72 second
./tests/basic/ec/self-heal.t  -  69 second
./tests/basic/afr/sparse-file-self-heal.t  -  68 second
./tests/basic/mount-nfs-auth.t  -  61 second
./tests/basic/ec/ec-cpu-extensions.t  -  38 second
./tests/basic/ec/ec-anonymous-fd.t  -  38 second
./tests/basic/afr/arbiter.t  -  36 second
./tests/basic/mgmt_v3-locks.t  -  34 second
./tests/basic/jbr/jbr.t  -  34 second
./tests/basic/ec/ec-notify.t  -  34 second
./tests/basic/ec/ec.t  -  33 second
./tests/basic/afr/data-self-heal.t  -  33 second
./tests/basic/ec/ec-readdir.t  -  29 second
./tests/basic/afr/quorum.t  -  28 second
./tests/basic/gfapi/gfapi-ssl-test.t  -  24 second
./tests/basic/geo-replication/marker-xattrs.t  -  23 second
./tests/basic/ec/quota.t  -  23 second
./tests/basic/afr/heal-quota.t  -  23 second
./tests/basic/afr/durability-off.t  -  23 second
./tests/basic/afr/arbiter-add-brick.t  -  22 second
./tests/basic/glusterd/volfile_server_switch.t  -  21 second
./tests/basic/afr/gfid-self-heal.t  -  21 second
./tests/basic/0symbol-check.t  -  21 second
./tests/basic/afr/replace-brick-self-heal.t  -  19 second
./tests/basic/afr/granular-esh/replace-brick.t  -  19 second
./tests/basic/afr/split-brain-resolution.t  -  18 second
./tests/basic/afr/client-side-heal.t  -  17 second
./tests/basic/glusterd/arbiter-volume-probe.t  -  16 second
./tests/basic/glusterd/disperse-create.t  -  15 second
./tests/basic/ec/statedump.t  -  15 second
./tests/basic/bd.t  -  14 second
./tests/basic/afr/root-squash-self-heal.t  -  14 second
./tests/basic/afr/resolve.t  -  14 second
./tests/basic/cdc.t  -  13 second
./tests/basic/afr/stale-file-lookup.t  -  13 second
./tests/basic/inode-quota-enforcing.t  -  12 second
./tests/basic/afr/heal-info.t  -  12 second
./tests/basic/glusterd/arbiter-volume.t  -  11 second
./tests/basic/afr/read-subvol-data.t  -  11 second
./tests/basic/meta.t  -  10 second
./tests/basic/fop-sampling.t  -  10 second
./tests/basic/ec/ec-read-policy.t  -  10 second
./tests/basic/afr/read-subvol-entry.t  -  10 second
./tests/basic/afr/arbiter-mount.t  -  10 second
./tests/basic/distribute/bug-1265677-use-readdirp.t  -  9 second
./tests/basic/afr/arbiter-statfs.t  -  9 second
./tests/basic/afr/arbiter-remove-brick.t  -  9 second
./tests/basic/gfapi/gfapi-dup.t  -  8 second
./tests/basic/gfapi/bug-1241104.t  -  8 second
./tests/basic/gfapi/anonymous_fd.t  -  8 second
./tests/basic/ec/ec-internal-xattrs.t  -  8 second
./tests/basic/ec/dht-rename.t  -  8 second
./tests/basic/distribute/throttle-rebal.t  -  8 second
./tests/basic/afr/gfid-mismatch.t  -  8 second
./tests/basic/afr/gfid-heal.t  -  8 second
./tests/basic/gfapi/libgfapi-fini-hang.t  -  7 second
./tests/basic/gfapi/gfapi-trunc.t  -  7 second
./tests/basic/fops-sanity.t  -  7 second
./tests/basic/ec/nfs.t  -  7 second
./tests/basic/jbr/jbr-volgen.t  -  6 second
./tests/basic/gfid-access.t  -  6 second
./tests/basic/afr/arbiter-cli.t  -  6 second
./tests/basic/exports_parsing.t  -  1 second
./tests/basic/gfapi/upcall-cache-invalidate.t  -  0 second
./tests/basic/first-test.t  -  0 second

Result is 1

tar: Removing leading `/' from member names
Logs archived in http://slave28.cloud.gluster.org/logs/glusterfs-logs-20160927:23:46:30.tgz
kernel.core_pattern = /%e-%p.core
Build step 'Execute shell' marked build as failure