[Gluster-Maintainers] Build failed in Jenkins: regression-test-burn-in #723

jenkins at build.gluster.org
Tue Jun 7 11:56:28 UTC 2016


See <http://build.gluster.org/job/regression-test-burn-in/723/changes>

Changes:

[Kaleb S. KEITHLEY] nfs : store sattr properly in nfs3_setattr() call

------------------------------------------
[...truncated 1025 lines...]
ok 82, LINENUM:197
ok 83, LINENUM:199
ok 84, LINENUM:200
ok 85, LINENUM:201
ok 86, LINENUM:203
ok 87, LINENUM:204
ok 88, LINENUM:205
ok 89, LINENUM:207
ok 90, LINENUM:208
ok 91, LINENUM:209
ok 92, LINENUM:211
ok 93, LINENUM:212
ok 94, LINENUM:213
ok 95, LINENUM:215
ok 96, LINENUM:216
ok 97, LINENUM:217
ok 98, LINENUM:219
ok 99, LINENUM:220
ok 100, LINENUM:221
ok 101, LINENUM:224
ok 102, LINENUM:227
ok 103, LINENUM:228
ok 104, LINENUM:229
ok 105, LINENUM:230
ok 106, LINENUM:231
ok 107, LINENUM:232
ok 108, LINENUM:233
ok 109, LINENUM:234
ok 110, LINENUM:235
ok 111, LINENUM:236
ok 112, LINENUM:237
ok 113, LINENUM:238
ok 114, LINENUM:239
ok 115, LINENUM:240
ok 116, LINENUM:241
ok 117, LINENUM:242
ok 118, LINENUM:243
ok 119, LINENUM:244
ok 120, LINENUM:245
ok 121, LINENUM:246
ok 122, LINENUM:247
ok 123, LINENUM:251
ok 124, LINENUM:252
ok 125, LINENUM:253
ok 126, LINENUM:256
ok 127, LINENUM:257
ok 128, LINENUM:258
ok 129, LINENUM:261
ok 130, LINENUM:262
ok 131, LINENUM:263
ok 132, LINENUM:266
ok 133, LINENUM:267
ok 134, LINENUM:268
ok 135, LINENUM:271
ok 136, LINENUM:272
ok 137, LINENUM:273
ok 138, LINENUM:276
ok 139, LINENUM:277
ok 140, LINENUM:278
ok 141, LINENUM:281
ok 142, LINENUM:282
ok 143, LINENUM:283
ok 144, LINENUM:287
volume start: patchy: success
ok 145, LINENUM:289
ok 146, LINENUM:290
ok 147, LINENUM:292
ok 148, LINENUM:293
ok 149, LINENUM:294
ok 150, LINENUM:295
ok 151, LINENUM:296
ok 152, LINENUM:297
ok 153, LINENUM:298
ok 154, LINENUM:299
ok 155, LINENUM:300
ok 156, LINENUM:301
ok 157, LINENUM:302
ok 158, LINENUM:303
ok 159, LINENUM:304
ok 160, LINENUM:305
ok 161, LINENUM:306
volume start: patchy: success
not ok 162 Got "" instead of "1", LINENUM:309
FAILED COMMAND: 1 afr_child_up_status patchy 1
not ok 163 Got "" instead of "1", LINENUM:310
FAILED COMMAND: 1 afr_child_up_status patchy 0
not ok 164 Got "0" instead of "1", LINENUM:315
FAILED COMMAND: 1 count_index_entries /d/backends/patchy0
not ok 165 Got "0" instead of "1", LINENUM:316
FAILED COMMAND: 1 count_index_entries /d/backends/patchy1
not ok 166 , LINENUM:318
FAILED COMMAND: gluster --mode=script --wignore volume stop patchy
fool_heal fool_me source_creations_heal/dir1
not ok 167 Got "" instead of "1", LINENUM:324
FAILED COMMAND: 1 afr_child_up_status patchy 1
not ok 168 Got "" instead of "1", LINENUM:325
FAILED COMMAND: 1 afr_child_up_status patchy 0
Connection failed. Please check if gluster daemon is operational.
not ok 169 Got "" instead of "Y", LINENUM:328
FAILED COMMAND: Y glustershd_up_status
not ok 170 Got "" instead of "1", LINENUM:329
FAILED COMMAND: 1 afr_child_up_status_in_shd patchy 0
not ok 171 Got "" instead of "1", LINENUM:330
FAILED COMMAND: 1 afr_child_up_status_in_shd patchy 1
not ok 172 , LINENUM:332
FAILED COMMAND: gluster --mode=script --wignore volume heal patchy
not ok 173 Got "::fool_heal:fool_me" instead of "~", LINENUM:333
FAILED COMMAND: ~ print_pending_heals spb_heal spb_me_heal fool_heal fool_me v1_fool_heal v1_fool_me source_deletions_heal source_deletions_me source_creations_heal source_creations_me v1_dirty_heal v1_dirty_me source_self_accusing
ok 174, LINENUM:335
ok 175, LINENUM:336
not ok 176 Got "N000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001" instead of "Y000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000", LINENUM:337
FAILED COMMAND: Y000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 heal_status /d/backends/patchy0 /d/backends/patchy1 fool_heal
not ok 177 Got "Y000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001" instead of "Y000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000", LINENUM:338
FAILED COMMAND: Y000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 heal_status /d/backends/patchy0 /d/backends/patchy1 fool_me
ok 178, LINENUM:339
ok 179, LINENUM:340
ok 180, LINENUM:341
ok 181, LINENUM:342
ok 182, LINENUM:343
ok 183, LINENUM:344
ok 184, LINENUM:345
ok 185, LINENUM:346
ok 186, LINENUM:347
ok 187, LINENUM:351
ok 188, LINENUM:352
ok 189, LINENUM:354
ok 190, LINENUM:355
ok 191, LINENUM:357
not ok 192 , LINENUM:358
FAILED COMMAND: stat /d/backends/patchy1/fool_heal/1
not ok 193 , LINENUM:360
FAILED COMMAND: stat /d/backends/patchy0/fool_heal/0
ok 194, LINENUM:361
ok 195, LINENUM:363
ok 196, LINENUM:364
ok 197, LINENUM:366
ok 198, LINENUM:367
ok 199, LINENUM:369
ok 200, LINENUM:370
ok 201, LINENUM:373
ok 202, LINENUM:376
ok 203, LINENUM:377
ok 204, LINENUM:378
ok 205, LINENUM:379
ok 206, LINENUM:380
ok 207, LINENUM:381
ok 208, LINENUM:382
ok 209, LINENUM:383
ok 210, LINENUM:384
ok 211, LINENUM:385
ok 212, LINENUM:386
ok 213, LINENUM:387
ok 214, LINENUM:388
ok 215, LINENUM:389
ok 216, LINENUM:392
ok 217, LINENUM:393
ok 218, LINENUM:394
ok 219, LINENUM:395
ok 220, LINENUM:396
ok 221, LINENUM:397
ok 222, LINENUM:398
ok 223, LINENUM:399
ok 224, LINENUM:400
ok 225, LINENUM:401
ok 226, LINENUM:402
ok 227, LINENUM:403
ok 228, LINENUM:404
ok 229, LINENUM:405
ok 230, LINENUM:409
ok 231, LINENUM:410
ok 232, LINENUM:413
ok 233, LINENUM:416
ok 234, LINENUM:419
ok 235, LINENUM:420
ok 236, LINENUM:423
ok 237, LINENUM:424
ok 238, LINENUM:427
not ok 239 , LINENUM:428
FAILED COMMAND: [ -d /d/backends/patchy0/source_creations_heal/dir1/dir2 ]
ok 240, LINENUM:431
ok 241, LINENUM:432
Failed 17/241 subtests 

Test Summary Report
-------------------
./tests/basic/afr/entry-self-heal.t (Wstat: 0 Tests: 241 Failed: 17)
  Failed tests:  162-173, 176-177, 192-193, 239
Files=1, Tests=241, 326 wallclock secs ( 0.13 usr  0.02 sys + 28.65 cusr 21.82 csys = 50.62 CPU)
Result: FAIL
End of test ./tests/basic/afr/entry-self-heal.t
================================================================================


Run complete
================================================================================
Number of tests found:                             11
Number of tests selected for run based on pattern: 11
Number of tests skipped as they were marked bad:   0
Number of tests skipped because of known_issues:   0
Number of tests that were run:                     11

1 test(s) failed 
./tests/basic/afr/entry-self-heal.t

0 test(s) generated core 


Tests ordered by time taken, slowest to fastest: 
================================================================================
./tests/basic/afr/entry-self-heal.t  -  326 second
./tests/basic/afr/add-brick-self-heal.t  -  97 second
./tests/basic/afr/data-self-heal.t  -  34 second
./tests/basic/afr/arbiter.t  -  34 second
./tests/basic/afr/durability-off.t  -  22 second
./tests/basic/afr/arbiter-add-brick.t  -  21 second
./tests/basic/0symbol-check.t  -  20 second
./tests/basic/afr/client-side-heal.t  -  13 second
./tests/basic/afr/arbiter-mount.t  -  11 second
./tests/basic/afr/arbiter-statfs.t  -  7 second
./tests/basic/afr/arbiter-remove-brick.t  -  7 second

Result is 1

+ RET=1
++ wc -l
++ ls -l '/*.core'
+ cur_count=0
++ ls '/*.core'
+ cur_cores=
+ '[' 0 '!=' 0 ']'
+ '[' 1 -ne 0 ']'
+ filename=logs/glusterfs-logs-20160607:11:46:33.tgz
+ tar -czf /archives/logs/glusterfs-logs-20160607:11:46:33.tgz /var/log/glusterfs /var/log/messages /var/log/messages-20160515 /var/log/messages-20160522 /var/log/messages-20160529 /var/log/messages-20160605
tar: Removing leading `/' from member names
+ echo Logs archived in http://slave23.cloud.gluster.org/logs/glusterfs-logs-20160607:11:46:33.tgz
Logs archived in http://slave23.cloud.gluster.org/logs/glusterfs-logs-20160607:11:46:33.tgz
+ case $(uname -s) in
++ uname -s
+ /sbin/sysctl -w kernel.core_pattern=/%e-%p.core
kernel.core_pattern = /%e-%p.core
+ exit 1
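An aside on the core-file check traced above: the job runs `ls '/*.core'`, with the glob in single quotes, so the pattern is passed to `ls` literally and can never expand to actual core files in `/`. A minimal sketch of an equivalent check that avoids the quoting pitfall (the path and variable names are assumptions, not the job's actual script):

```shell
#!/bin/sh
# Count core files directly under / without relying on glob expansion.
# find emits one path per line; wc -l turns that into a count.
cur_count=$(find / -maxdepth 1 -name '*.core' 2>/dev/null | wc -l)
echo "cores found: $cur_count"

# Mirror the job's comparison: any nonzero count means new cores appeared.
if [ "$cur_count" -ne 0 ]; then
    echo "new core files detected"
fi
```

With no `*.core` files present this prints a count of 0 and skips the warning, which matches the `'[' 0 '!=' 0 ']'` branch seen in the trace.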
+ RET=1
+ '[' 1 = 0 ']'
+ V=-1
+ VERDICT=FAILED
+ '[' 0 -eq 1 ']'
+ exit 1
Build step 'Execute shell' marked build as failure

