[Gluster-Maintainers] Build failed in Jenkins: regression-test-burn-in #3019
jenkins at build.gluster.org
Thu Jun 15 13:37:26 UTC 2017
See <http://build.gluster.org/job/regression-test-burn-in/3019/display/redirect?page=changes>
Changes:
[atin] glusterd: fix crash on statedump when no volumes are started
[Jeff Darcy] .testignore: if a file doesn't change any code/behavior, add it here
------------------------------------------
[...truncated 189.08 KB...]
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
grep: /var/run/gluster/: Is a directory
rm: cannot remove `/var/run/gluster/': Is a directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
grep: /var/run/gluster/: Is a directory
rm: cannot remove `/var/run/gluster/': Is a directory
grep: /var/run/gluster/: Is a directory
rm: cannot remove `/var/run/gluster/': Is a directory
Launching heal operation to perform index self heal on volume patchy has been unsuccessful on bricks that are down. Please check if all brick processes are running.
cat: /mnt/glusterfs/0/file: No such file or directory
md5sum: /d/backends/patchy0/file: No such file or directory
./tests/basic/afr/../../include.rc: line 313: [: 6b2dcb5cc44a68fad62cf98a06fbf248: unary operator expected
umount2: Invalid argument
umount: /mnt/glusterfs/0: not mounted
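The "unary operator expected" error reported from include.rc line 313 above is the usual bash symptom of an empty, unquoted operand inside a [ ] comparison: md5sum fails on the missing file, its output expands to nothing, and [ is left with a single operand. A minimal sketch of that failure mode (variable names are illustrative, not the actual contents of include.rc; the same pattern reappears later in the TAP output as "FAILED COMMAND: [ <md5> == ]"):

    # md5sum fails because the backend file is missing, so $actual expands to nothing
    expected=6b2dcb5cc44a68fad62cf98a06fbf248
    actual=$(md5sum /d/backends/patchy0/file 2>/dev/null | awk '{print $1}')
    [ $expected == $actual ]      # expands to: [ 6b2dcb5cc44a68fad62cf98a06fbf248 == ]
                                  # -> bash: [: 6b2dcb5...: unary operator expected
    [ "$expected" == "$actual" ]  # quoting both operands keeps the test well-formed;
                                  # it simply evaluates to false instead of erroring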
./tests/basic/afr/split-brain-favorite-child-policy.t ..
1..136
ok 1, LINENUM:9
ok 2, LINENUM:10
ok 3, LINENUM:13
ok 4, LINENUM:14
ok 5, LINENUM:15
ok 6, LINENUM:16
ok 7, LINENUM:17
ok 8, LINENUM:18
ok 9, LINENUM:19
ok 10, LINENUM:20
ok 11, LINENUM:21
ok 12, LINENUM:24
ok 13, LINENUM:25
ok 14, LINENUM:26
ok 15, LINENUM:27
ok 16, LINENUM:28
ok 17, LINENUM:29
ok 18, LINENUM:30
ok 19, LINENUM:32
ok 20, LINENUM:33
ok 21, LINENUM:34
ok 22, LINENUM:35
ok 23, LINENUM:36
ok 24, LINENUM:37
ok 25, LINENUM:38
ok 26, LINENUM:39
ok 27, LINENUM:43
ok 28, LINENUM:46
ok 29, LINENUM:54
ok 30, LINENUM:55
not ok 31 Got "" instead of "Y", LINENUM:56
FAILED COMMAND: Y glustershd_up_status
not ok 32 Got "" instead of "1", LINENUM:57
FAILED COMMAND: 1 afr_child_up_status_in_shd patchy 0
not ok 33 Got "" instead of "1", LINENUM:58
FAILED COMMAND: 1 afr_child_up_status_in_shd patchy 1
not ok 34 , LINENUM:59
FAILED COMMAND: gluster --mode=script --wignore volume heal patchy
not ok 35 Got "" instead of "^0$", LINENUM:60
FAILED COMMAND: ^0$ get_pending_heal_count patchy
ok 36, LINENUM:63
ok 37, LINENUM:64
not ok 38 , LINENUM:65
FAILED COMMAND: glusterfs --volfile-id=/patchy --volfile-server=slave22.cloud.gluster.org /mnt/glusterfs/0 --attribute-timeout=0 --entry-timeout=0
not ok 39 Got "1" instead of "0", LINENUM:67
FAILED COMMAND: 0 echo 1
not ok 40 , LINENUM:70
FAILED COMMAND: gluster --mode=script --wignore volume set patchy cluster.favorite-child-policy none
not ok 41 , LINENUM:71
FAILED COMMAND: gluster --mode=script --wignore volume set patchy cluster.self-heal-daemon off
ok 42, LINENUM:72
ok 43, LINENUM:73
not ok 44 , LINENUM:74
FAILED COMMAND: gluster --mode=script --wignore volume start patchy force
not ok 45 Got "" instead of "1", LINENUM:75
FAILED COMMAND: 1 brick_up_status patchy slave22.cloud.gluster.org /d/backends/patchy1
ok 46, LINENUM:76
ok 47, LINENUM:77
ok 48, LINENUM:78
ok 49, LINENUM:80
ok 50, LINENUM:81
ok 51, LINENUM:82
ok 52, LINENUM:83
ok 53, LINENUM:84
ok 54, LINENUM:85
ok 55, LINENUM:86
ok 56, LINENUM:87
not ok 57 Got "0" instead of "1", LINENUM:91
FAILED COMMAND: 1 echo 0
ok 58, LINENUM:95
ok 59, LINENUM:96
ok 60, LINENUM:97
ok 61, LINENUM:98
ok 62, LINENUM:99
ok 63, LINENUM:100
ok 64, LINENUM:101
ok 65, LINENUM:103
ok 66, LINENUM:105
ok 67, LINENUM:108
ok 68, LINENUM:109
ok 69, LINENUM:110
ok 70, LINENUM:111
ok 71, LINENUM:112
ok 72, LINENUM:113
ok 73, LINENUM:114
ok 74, LINENUM:115
ok 75, LINENUM:116
ok 76, LINENUM:118
ok 77, LINENUM:119
ok 78, LINENUM:120
ok 79, LINENUM:121
ok 80, LINENUM:122
ok 81, LINENUM:123
ok 82, LINENUM:124
ok 83, LINENUM:125
not ok 84 Got "0" instead of "1", LINENUM:129
FAILED COMMAND: 1 echo 0
ok 85, LINENUM:133
ok 86, LINENUM:134
ok 87, LINENUM:135
ok 88, LINENUM:136
ok 89, LINENUM:137
not ok 90 , LINENUM:138
FAILED COMMAND: gluster --mode=script --wignore volume heal patchy
ok 91, LINENUM:139
not ok 92 Got "1" instead of "0", LINENUM:141
FAILED COMMAND: 0 echo 1
not ok 93 , LINENUM:143
FAILED COMMAND: [ ee030d07cf4b2bcd126c8b1785d754dd == ]
not ok 94 , LINENUM:148
FAILED COMMAND: gluster --mode=script --wignore volume add-brick patchy replica 3 slave22.cloud.gluster.org:/d/backends/patchy2
ok 95, LINENUM:149
ok 96, LINENUM:150
ok 97, LINENUM:151
ok 98, LINENUM:152
ok 99, LINENUM:153
ok 100, LINENUM:154
ok 101, LINENUM:155
ok 102, LINENUM:156
ok 103, LINENUM:158
ok 104, LINENUM:159
ok 105, LINENUM:160
ok 106, LINENUM:161
ok 107, LINENUM:162
not ok 108 , LINENUM:163
FAILED COMMAND: gluster --mode=script --wignore volume start patchy force
ok 109, LINENUM:164
ok 110, LINENUM:165
ok 111, LINENUM:166
ok 112, LINENUM:167
ok 113, LINENUM:168
ok 114, LINENUM:170
ok 115, LINENUM:171
ok 116, LINENUM:172
ok 117, LINENUM:173
ok 118, LINENUM:174
ok 119, LINENUM:175
ok 120, LINENUM:176
ok 121, LINENUM:177
ok 122, LINENUM:178
ok 123, LINENUM:179
ok 124, LINENUM:180
not ok 125 Got "0" instead of "1", LINENUM:184
FAILED COMMAND: 1 echo 0
ok 126, LINENUM:188
ok 127, LINENUM:189
ok 128, LINENUM:190
ok 129, LINENUM:191
ok 130, LINENUM:192
ok 131, LINENUM:193
not ok 132 , LINENUM:194
FAILED COMMAND: gluster --mode=script --wignore volume heal patchy
ok 133, LINENUM:195
not ok 134 Got "1" instead of "0", LINENUM:197
FAILED COMMAND: 0 echo 1
not ok 135 , LINENUM:199
FAILED COMMAND: [ 6b2dcb5cc44a68fad62cf98a06fbf248 == ]
ok 136, LINENUM:201
Failed 22/136 subtests
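Each "not ok" line above pairs the expected value with the command whose output was compared against it: in "FAILED COMMAND: Y glustershd_up_status", "Y" is the expected output and the rest is the check that returned "" instead. These checks come from the TEST/EXPECT helpers in tests/include.rc; a hedged reconstruction of the style of .t line behind the first few failures is shown below ($CLI and $V0 are the framework's conventional variables, not verified against this particular script):

    EXPECT "Y" glustershd_up_status              # LINENUM:56 - self-heal daemon should be up
    EXPECT "1" afr_child_up_status_in_shd $V0 0  # LINENUM:57 - shd should see brick 0 as up
    TEST $CLI volume heal $V0                    # LINENUM:59 - heal command should succeed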
Test Summary Report
-------------------
./tests/basic/afr/split-brain-favorite-child-policy.t (Wstat: 0 Tests: 136 Failed: 22)
Failed tests: 31-35, 38-41, 44-45, 57, 84, 90, 92-94, 108, 125, 132, 134-135
Files=1, Tests=136, 428 wallclock secs ( 0.09 usr 0.04 sys + 19.11 cusr 31.26 csys = 50.50 CPU)
Result: FAIL
End of test ./tests/basic/afr/split-brain-favorite-child-policy.t
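Since the .t scripts emit TAP (the "1..136" / "ok N" lines above) and the per-file summary is prove's, the single failing test can usually be re-run in isolation from a glusterfs source tree; the exact invocation below is an assumption based on that layout, not taken from this log:

    # run as root from the top of a built glusterfs tree (the .t starts real bricks)
    prove -v ./tests/basic/afr/split-brain-favorite-child-policy.t
    # or, if the repo's run-tests.sh wrapper is available:
    # ./run-tests.sh tests/basic/afr/split-brain-favorite-child-policy.t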
================================================================================
Run complete
================================================================================
Number of tests found: 37
Number of tests selected for run based on pattern: 37
Number of tests skipped as they were marked bad: 1
Number of tests skipped because of known_issues: 0
Number of tests that were run: 36
1 test(s) failed
./tests/basic/afr/split-brain-favorite-child-policy.t
0 test(s) generated core
Tests ordered by time taken, slowest to fastest:
================================================================================
./tests/basic/afr/split-brain-favorite-child-policy.t - 1193 second
./tests/basic/afr/entry-self-heal.t - 168 second
./tests/basic/afr/self-heal.t - 141 second
./tests/basic/afr/self-heald.t - 128 second
./tests/basic/afr/sparse-file-self-heal.t - 79 second
./tests/basic/afr/metadata-self-heal.t - 74 second
./tests/basic/afr/granular-esh/cli.t - 67 second
./tests/basic/afr/arbiter.t - 55 second
./tests/basic/afr/quorum.t - 52 second
./tests/basic/afr/inodelk.t - 49 second
./tests/basic/afr/granular-esh/conservative-merge.t - 36 second
./tests/basic/afr/gfid-self-heal.t - 34 second
./tests/basic/afr/data-self-heal.t - 30 second
./tests/basic/afr/durability-off.t - 28 second
./tests/basic/afr/arbiter-add-brick.t - 28 second
./tests/basic/afr/arbiter-mount.t - 26 second
./tests/basic/afr/read-subvol-data.t - 25 second
./tests/basic/afr/heal-quota.t - 25 second
./tests/basic/afr/replace-brick-self-heal.t - 24 second
./tests/basic/afr/read-subvol-entry.t - 24 second
./tests/basic/afr/granular-esh/replace-brick.t - 24 second
./tests/basic/afr/granular-esh/granular-indices-but-non-granular-heal.t - 23 second
./tests/basic/afr/compounded-write-txns.t - 23 second
./tests/basic/afr/granular-esh/granular-esh.t - 22 second
./tests/basic/afr/arbiter-statfs.t - 22 second
./tests/basic/0symbol-check.t - 22 second
./tests/basic/afr/resolve.t - 20 second
./tests/basic/afr/granular-esh/add-brick.t - 20 second
./tests/basic/afr/add-brick-self-heal.t - 20 second
./tests/basic/afr/client-side-heal.t - 19 second
./tests/basic/afr/gfid-mismatch.t - 18 second
./tests/basic/afr/root-squash-self-heal.t - 16 second
./tests/basic/afr/arbiter-remove-brick.t - 12 second
./tests/basic/afr/gfid-heal.t - 11 second
./tests/basic/afr/heal-info.t - 10 second
./tests/basic/afr/arbiter-cli.t - 7 second
Result is 1
tar: Removing leading `/' from member names
Logs archived in http://slave22.cloud.gluster.org/logs/glusterfs-logs-regression-test-burn-in-3019.tgz
kernel.core_pattern = /%e-%p.core
Build step 'Execute shell' marked build as failure
Not sending mail to unregistered user amukherj at redhat.com
Not sending mail to unregistered user jeff at pl.atyp.us
Not sending mail to unregistered user avishwan at redhat.com