[Gluster-Maintainers] Build failed in Jenkins: regression-test-burn-in #1623
Atin Mukherjee
amukherj at redhat.com
Thu Sep 1 15:40:22 UTC 2016
This is strange; Ashish has already fixed the missing SSL certs issue
through fad93c1.
From the logs it definitely looks like glusterd failed to come up:
[2016-09-01 13:30:28.909555] E [socket.c:4122:socket_init]
0-socket.management: could not load our cert
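For reference, the socket_init error above means the management transport
could not load its certificate, so glusterd never came up. A rough sketch of
the kind of setup the slave needs before glusterd can start with management
encryption is below; the paths are the GlusterFS defaults
(/etc/ssl/glusterfs.key, /etc/ssl/glusterfs.pem, /etc/ssl/glusterfs.ca) and
the secure-access toggle, but the exact steps done by fad93c1 in the test
harness may differ:

    # Generate a self-signed cert at the default paths glusterd checks
    # (sketch only; the regression harness may use different steps).
    openssl genrsa -out /etc/ssl/glusterfs.key 2048
    openssl req -new -x509 -key /etc/ssl/glusterfs.key \
        -subj "/CN=$(hostname)" -days 365 -out /etc/ssl/glusterfs.pem
    cat /etc/ssl/glusterfs.pem > /etc/ssl/glusterfs.ca
    # Enable SSL on the management connection
    touch /var/lib/glusterd/secure-access

If any of these files are missing on the slave, glusterd will fail exactly as
in the log above.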
On Thu, Sep 1, 2016 at 7:38 PM, Vijay Bellur <vbellur at redhat.com> wrote:
> tests/bugs/cli/bug-1320388.t is failing quite frequently in
> regression-test-burn-in. Can you please take a look?
>
> Thx,
> Vijay
>
> On Thu, Sep 1, 2016 at 9:33 AM, <jenkins at build.gluster.org> wrote:
> > See <http://build.gluster.org/job/regression-test-burn-in/1623/>
> >
> > ------------------------------------------
> > [...truncated 10509 lines...]
> > rm: cannot remove `/var/run/gluster/': Is a directory
> > kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ...
> or kill -l [sigspec]
> > sed: read error on /var/run/gluster/: Is a directory
> > rm: cannot remove `/var/run/gluster/': Is a directory
> > kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ...
> or kill -l [sigspec]
> > sed: read error on /var/run/gluster/: Is a directory
> > rm: cannot remove `/var/run/gluster/': Is a directory
> > kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ...
> or kill -l [sigspec]
> > sed: read error on /var/run/gluster/: Is a directory
> > rm: cannot remove `/var/run/gluster/': Is a directory
> > kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ...
> or kill -l [sigspec]
> > sed: read error on /var/run/gluster/: Is a directory
> > rm: cannot remove `/var/run/gluster/': Is a directory
> > kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ...
> or kill -l [sigspec]
> > sed: read error on /var/run/gluster/: Is a directory
> > rm: cannot remove `/var/run/gluster/': Is a directory
> > touch: cannot touch `/mnt/glusterfs/0/a': Transport endpoint is not
> connected
> > cat: /var/lib/glusterd/vols/patchy/run/slave28.cloud.gluster.org-d-backends-patchy5.pid:
> No such file or directory
> > kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ...
> or kill -l [sigspec]
> > ./tests/bugs/cli/bug-1320388.t: line 19: /mnt/glusterfs/0/a: Transport
> endpoint is not connected
> > kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ...
> or kill -l [sigspec]
> > sed: read error on /var/run/gluster/: Is a directory
> > rm: cannot remove `/var/run/gluster/': Is a directory
> > kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ...
> or kill -l [sigspec]
> > sed: read error on /var/run/gluster/: Is a directory
> > rm: cannot remove `/var/run/gluster/': Is a directory
> > kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ...
> or kill -l [sigspec]
> > sed: read error on /var/run/gluster/: Is a directory
> > rm: cannot remove `/var/run/gluster/': Is a directory
> > kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ...
> or kill -l [sigspec]
> > sed: read error on /var/run/gluster/: Is a directory
> > rm: cannot remove `/var/run/gluster/': Is a directory
> > kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ...
> or kill -l [sigspec]
> > sed: read error on /var/run/gluster/: Is a directory
> > rm: cannot remove `/var/run/gluster/': Is a directory
> > kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ...
> or kill -l [sigspec]
> > sed: read error on /var/run/gluster/: Is a directory
> > rm: cannot remove `/var/run/gluster/': Is a directory
> > kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ...
> or kill -l [sigspec]
> > sed: read error on /var/run/gluster/: Is a directory
> > rm: cannot remove `/var/run/gluster/': Is a directory
> > ./tests/bugs/cli/bug-1320388.t ..
> > 1..11
> > not ok 1 , LINENUM:11
> > FAILED COMMAND: glusterd
> > not ok 2 , LINENUM:12
> > FAILED COMMAND: pidof glusterd
> > ok 3, LINENUM:13
> > ok 4, LINENUM:14
> > not ok 5 , LINENUM:15
> > FAILED COMMAND: glusterfs --entry-timeout=0 --attribute-timeout=0 -s
> slave28.cloud.gluster.org --volfile-id patchy /mnt/glusterfs/0
> > not ok 6 Got "" instead of "^6$", LINENUM:16
> > FAILED COMMAND: ^6$ ec_child_up_count patchy 0
> > not ok 7 , LINENUM:18
> > FAILED COMMAND: kill_brick patchy slave28.cloud.gluster.org
> /d/backends/patchy5
> > not ok 8 Got "" instead of "^5$", LINENUM:20
> > FAILED COMMAND: ^5$ get_pending_heal_count patchy
> > not ok 9 Got "" instead of "^6$", LINENUM:22
> > FAILED COMMAND: ^6$ ec_child_up_count patchy 0
> > ok 10, LINENUM:23
> > not ok 11 Got "" instead of "^0$", LINENUM:24
> > FAILED COMMAND: ^0$ get_pending_heal_count patchy
> > Failed 8/11 subtests
> >
> > Test Summary Report
> > -------------------
> > ./tests/bugs/cli/bug-1320388.t (Wstat: 0 Tests: 11 Failed: 8)
> > Failed tests: 1-2, 5-9, 11
> > Files=1, Tests=11, 202 wallclock secs ( 0.02 usr 0.01 sys + 12.55 cusr
> 3.28 csys = 15.86 CPU)
> > Result: FAIL
> > End of test ./tests/bugs/cli/bug-1320388.t
> > ============================================================
> ====================
> >
> >
> > Run complete
> > ============================================================
> ====================
> > Number of tests found: 159
> > Number of tests selected for run based on pattern: 159
> > Number of tests skipped as they were marked bad: 6
> > Number of tests skipped because of known_issues: 1
> > Number of tests that were run: 152
> >
> > 1 test(s) failed
> > ./tests/bugs/cli/bug-1320388.t
> >
> > 0 test(s) generated core
> >
> >
> > Tests ordered by time taken, slowest to fastest:
> > ============================================================
> ====================
> > ./tests/basic/afr/split-brain-favorite-child-policy.t - 578 second
> > ./tests/basic/ec/ec-12-4.t - 345 second
> > ./tests/basic/ec/ec-background-heals.t - 302 second
> > ./tests/bugs/cli/bug-1320388.t - 202 second
> > ./tests/basic/ec/ec-7-3.t - 191 second
> > ./tests/basic/ec/ec-6-2.t - 178 second
> > ./tests/basic/afr/entry-self-heal.t - 174 second
> > ./tests/basic/tier/tier-heald.t - 166 second
> > ./tests/basic/ec/ec-5-2.t - 152 second
> > ./tests/basic/ec/ec-5-1.t - 151 second
> > ./tests/basic/afr/self-heal.t - 151 second
> > ./tests/basic/glusterd/heald.t - 139 second
> > ./tests/basic/tier/tier.t - 132 second
> > ./tests/basic/ec/ec-4-1.t - 119 second
> > ./tests/basic/tier/legacy-many.t - 115 second
> > ./tests/basic/afr/granular-esh/conservative-merge.t - 114 second
> > ./tests/basic/ec/ec-root-heal.t - 111 second
> > ./tests/basic/afr/granular-esh/granular-esh.t - 100 second
> > ./tests/basic/afr/add-brick-self-heal.t - 100 second
> > ./tests/basic/afr/granular-esh/add-brick.t - 99 second
> > ./tests/basic/ec/ec-new-entry.t - 92 second
> > ./tests/basic/ec/ec-3-1.t - 92 second
> > ./tests/basic/afr/split-brain-heal-info.t - 88 second
> > ./tests/basic/afr/self-heald.t - 86 second
> > ./tests/basic/afr/split-brain-healing.t - 81 second
> > ./tests/basic/afr/metadata-self-heal.t - 76 second
> > ./tests/basic/quota.t - 74 second
> > ./tests/basic/ec/self-heal.t - 68 second
> > ./tests/basic/tier/new-tier-cmds.t - 65 second
> > ./tests/basic/tier/tierd_check.t - 64 second
> > ./tests/basic/afr/sparse-file-self-heal.t - 62 second
> > ./tests/basic/tier/frequency-counters.t - 61 second
> > ./tests/basic/volume-snapshot-clone.t - 55 second
> > ./tests/basic/uss.t - 52 second
> > ./tests/basic/ec/ec-notify.t - 47 second
> > ./tests/basic/tier/fops-during-migration-pause.t - 46 second
> > ./tests/basic/ec/ec-readdir.t - 44 second
> > ./tests/basic/volume-snapshot.t - 42 second
> > ./tests/basic/mount-nfs-auth.t - 40 second
> > ./tests/basic/tier/locked_file_migration.t - 38 second
> > ./tests/basic/ec/ec-anonymous-fd.t - 38 second
> > ./tests/basic/afr/arbiter.t - 38 second
> > ./tests/basic/tier/unlink-during-migration.t - 37 second
> > ./tests/basic/jbr/jbr.t - 37 second
> > ./tests/basic/ec/ec.t - 35 second
> > ./tests/basic/afr/data-self-heal.t - 35 second
> > ./tests/basic/mgmt_v3-locks.t - 34 second
> > ./tests/basic/afr/quorum.t - 31 second
> > ./tests/basic/quota-ancestry-building.t - 29 second
> > ./tests/basic/afr/arbiter-add-brick.t - 27 second
> > ./tests/basic/afr/durability-off.t - 26 second
> > ./tests/bitrot/bug-1294786.t - 24 second
> > ./tests/basic/tier/file_with_spaces.t - 24 second
> > ./tests/basic/afr/heal-quota.t - 24 second
> > ./tests/basic/afr/gfid-self-heal.t - 23 second
> > ./tests/basic/geo-replication/marker-xattrs.t - 22 second
> > ./tests/basic/ec/quota.t - 22 second
> > ./tests/bugs/bitrot/bug-1227996.t - 21 second
> > ./tests/basic/tier/readdir-during-migration.t - 21 second
> > ./tests/basic/op_errnos.t - 21 second
> > ./tests/basic/ec/statedump.t - 21 second
> > ./tests/basic/afr/replace-brick-self-heal.t - 21 second
> > ./tests/basic/glusterd/volfile_server_switch.t - 20 second
> > ./tests/basic/afr/granular-esh/replace-brick.t - 20 second
> > ./tests/basic/0symbol-check.t - 20 second
> > ./tests/bugs/cli/bug-1113476.t - 19 second
> > ./tests/bugs/changelog/bug-1225542.t - 19 second
> > ./tests/bugs/bitrot/bug-1245981.t - 18 second
> > ./tests/basic/afr/split-brain-resolution.t - 18 second
> > ./tests/bitrot/br-state-check.t - 17 second
> > ./tests/bugs/cli/bug-1077682.t - 16 second
> > ./tests/bugs/changelog/bug-1321955.t - 16 second
> > ./tests/basic/afr/client-side-heal.t - 16 second
> > ./tests/bugs/changelog/bug-1211327.t - 15 second
> > ./tests/bugs/bitrot/bug-1288490.t - 15 second
> > ./tests/bugs/bitrot/bug-1228680.t - 15 second
> > ./tests/basic/glusterd/arbiter-volume-probe.t - 15 second
> > ./tests/basic/bd.t - 15 second
> > ./tests/basic/afr/resolve.t - 15 second
> > ./tests/bugs/cli/bug-1047416.t - 14 second
> > ./tests/bugs/bitrot/1209751-bitrot-scrub-tunable-reset.t - 14 second
> > ./tests/basic/volume-status.t - 14 second
> > ./tests/basic/glusterd/disperse-create.t - 14 second
> > ./tests/bugs/bug-1110262.t - 13 second
> > ./tests/basic/tier/ctr-rename-overwrite.t - 13 second
> > ./tests/basic/rpc-coverage.t - 13 second
> > ./tests/basic/pump.t - 13 second
> > ./tests/basic/cdc.t - 13 second
> > ./tests/bugs/bitrot/1209752-volume-status-should-show-bitrot-scrub-info.t
> - 12 second
> > ./tests/basic/volume.t - 12 second
> > ./tests/basic/quota-nfs.t - 12 second
> > ./tests/basic/quota-anon-fd-nfs.t - 12 second
> > ./tests/basic/nufa.t - 12 second
> > ./tests/basic/inode-quota-enforcing.t - 12 second
> > ./tests/basic/afr/stale-file-lookup.t - 12 second
> > ./tests/basic/afr/root-squash-self-heal.t - 12 second
> > ./tests/basic/afr/read-subvol-data.t - 12 second
> > ./tests/bugs/cli/bug-1087487.t - 11 second
> > ./tests/bugs/cli/bug-1030580.t - 11 second
> > ./tests/bugs/changelog/bug-1208470.t - 11 second
> > ./tests/bugs/access-control/bug-958691.t - 11 second
> > ./tests/bugs/access-control/bug-887098-gmount-crash.t - 11 second
> > ./tests/bitrot/bug-1207627-bitrot-scrub-status.t - 11 second
> > ./tests/basic/stats-dump.t - 11 second
> > ./tests/basic/mount.t - 11 second
> > ./tests/basic/glusterd/arbiter-volume.t - 11 second
> > ./tests/basic/ec/ec-read-policy.t - 11 second
> > ./tests/basic/afr/arbiter-mount.t - 11 second
> > ./tests/bugs/bitrot/1207029-bitrot-daemon-should-start-on-valid-node.t
> - 10 second
> > ./tests/basic/gfapi/bug1291259.t - 10 second
> > ./tests/basic/fop-sampling.t - 10 second
> > ./tests/basic/afr/heal-info.t - 10 second
> > ./tests/bugs/cli/bug-1022905.t - 9 second
> > ./tests/bugs/bitrot/1209818-vol-info-show-scrub-process-properly.t -
> 9 second
> > ./tests/bitrot/bug-1244613.t - 9 second
> > ./tests/basic/meta.t - 9 second
> > ./tests/basic/afr/read-subvol-entry.t - 9 second
> > ./tests/bugs/bitrot/bug-1229134-bitd-not-support-vol-set.t - 8 second
> > ./tests/basic/pgfid-feat.t - 8 second
> > ./tests/basic/gfapi/gfapi-trunc.t - 8 second
> > ./tests/basic/gfapi/bug-1241104.t - 8 second
> > ./tests/basic/ec/ec-internal-xattrs.t - 8 second
> > ./tests/basic/distribute/bug-1265677-use-readdirp.t - 8 second
> > ./tests/basic/afr/gfid-mismatch.t - 8 second
> > ./tests/basic/afr/gfid-heal.t - 8 second
> > ./tests/basic/afr/arbiter-statfs.t - 8 second
> > ./tests/basic/afr/arbiter-remove-brick.t - 8 second
> > ./tests/bugs/bug-1258069.t - 7 second
> > ./tests/bugs/bitrot/bug-1210684-scrub-pause-resume-error-handling.t -
> 7 second
> > ./tests/bitrot/bug-internal-xattrs-check-1243391.t - 7 second
> > ./tests/bitrot/bug-1221914.t - 7 second
> > ./tests/bitrot/br-stub.t - 7 second
> > ./tests/basic/quota-rename.t - 7 second
> > ./tests/basic/gfapi/gfapi-dup.t - 7 second
> > ./tests/basic/gfapi/anonymous_fd.t - 7 second
> > ./tests/basic/ec/dht-rename.t - 7 second
> > ./tests/basic/distribute/throttle-rebal.t - 7 second
> > ./tests/bugs/cli/bug-1004218.t - 6 second
> > ./tests/bugs/access-control/bug-1051896.t - 6 second
> > ./tests/basic/jbr/jbr-volgen.t - 6 second
> > ./tests/basic/gfid-access.t - 6 second
> > ./tests/basic/gfapi/libgfapi-fini-hang.t - 6 second
> > ./tests/basic/fops-sanity.t - 6 second
> > ./tests/basic/ec/nfs.t - 6 second
> > ./tests/bugs/cli/bug-1047378.t - 5 second
> > ./tests/basic/afr/arbiter-cli.t - 5 second
> > ./tests/basic/rpm.t - 2 second
> > ./tests/basic/posixonly.t - 1 second
> > ./tests/basic/netgroup_parsing.t - 1 second
> > ./tests/basic/gfapi/upcall-cache-invalidate.t - 1 second
> > ./tests/basic/exports_parsing.t - 1 second
> > ./tests/basic/first-test.t - 0 second
> >
> > Result is 1
> >
> > tar: Removing leading `/' from member names
> > Logs archived in http://slave28.cloud.gluster.org/logs/glusterfs-logs-
> 20160901:11:46:33.tgz
> > kernel.core_pattern = /%e-%p.core
> > Build step 'Execute shell' marked build as failure
> > _______________________________________________
> > maintainers mailing list
> > maintainers at gluster.org
> > http://www.gluster.org/mailman/listinfo/maintainers
> _______________________________________________
> maintainers mailing list
> maintainers at gluster.org
> http://www.gluster.org/mailman/listinfo/maintainers
>
--
--Atin