From ci at centos.org Thu Aug 1 00:16:50 2019
From: ci at centos.org (ci at centos.org)
Date: Thu, 1 Aug 2019 00:16:50 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #442
In-Reply-To: <1913422578.1683.1564532204780.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1913422578.1683.1564532204780.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1997295368.1777.1564618610586.JavaMail.jenkins@jenkins.ci.centos.org>

See 
------------------------------------------
[...truncated 39.02 KB...]
Transaction test succeeded
Running transaction
  Installing : python36-libs-3.6.8-1.el7.x86_64                            1/52
  Installing : python36-3.6.8-1.el7.x86_64                                 2/52
  Installing : apr-1.4.8-3.el7_4.1.x86_64                                  3/52
  Installing : mpfr-3.1.1-4.el7.x86_64                                     4/52
  Installing : libmpc-1.0.1-3.el7.x86_64                                   5/52
  Installing : apr-util-1.5.2-6.el7.x86_64                                 6/52
  Installing : python36-six-1.11.0-3.el7.noarch                            7/52
  Installing : cpp-4.8.5-36.el7_6.2.x86_64                                 8/52
  Installing : python36-idna-2.7-2.el7.noarch                              9/52
  Installing : python36-pysocks-1.6.8-6.el7.noarch                        10/52
  Installing : python36-urllib3-1.19.1-5.el7.noarch                       11/52
  Installing : python36-pyroute2-0.4.13-2.el7.noarch                      12/52
  Installing : python36-setuptools-39.2.0-3.el7.noarch                    13/52
  Installing : python36-chardet-2.3.0-6.el7.noarch                        14/52
  Installing : python36-requests-2.12.5-3.el7.noarch                      15/52
  Installing : python36-distro-1.2.0-3.el7.noarch                         16/52
  Installing : python36-markupsafe-0.23-3.el7.x86_64                      17/52
  Installing : python36-jinja2-2.8.1-2.el7.noarch                         18/52
  Installing : python36-rpm-4.11.3-4.el7.x86_64                           19/52
  Installing : elfutils-0.172-2.el7.x86_64                                20/52
  Installing : unzip-6.0-19.el7.x86_64                                    21/52
  Installing : dwz-0.11-3.el7.x86_64                                      22/52
  Installing : bzip2-1.0.6-13.el7.x86_64                                  23/52
  Installing : distribution-gpg-keys-1.32-1.el7.noarch                    24/52
  Installing : mock-core-configs-30.4-1.el7.noarch                        25/52
  Installing : usermode-1.111-5.el7.x86_64                                26/52
  Installing : pakchois-0.4-10.el7.x86_64                                 27/52
  Installing : patch-2.7.1-10.el7_5.x86_64                                28/52
  Installing : libmodman-2.0.1-8.el7.x86_64                               29/52
  Installing : libproxy-0.4.11-11.el7.x86_64                              30/52
  Installing : gdb-7.6.1-114.el7.x86_64                                   31/52
  Installing : perl-Thread-Queue-3.02-2.el7.noarch                        32/52
  Installing : perl-srpm-macros-1-8.el7.noarch                            33/52
  Installing : pigz-2.3.4-1.el7.x86_64                                    34/52
  Installing : golang-src-1.11.5-1.el7.noarch                             35/52
  Installing : nettle-2.7.1-8.el7.x86_64                                  36/52
  Installing : kernel-headers-3.10.0-957.27.2.el7.x86_64                  37/52
  Installing : glibc-headers-2.17-260.el7_6.6.x86_64                      38/52
  Installing : glibc-devel-2.17-260.el7_6.6.x86_64                        39/52
  Installing : gcc-4.8.5-36.el7_6.2.x86_64                                40/52
  Installing : zip-3.0-11.el7.x86_64                                      41/52
  Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch               42/52
  Installing : mercurial-2.6.2-8.el7_4.x86_64                             43/52
  Installing : trousers-0.3.14-2.el7.x86_64                               44/52
  Installing : gnutls-3.3.29-9.el7_6.x86_64                               45/52
  Installing : neon-0.30.0-3.el7.x86_64                                   46/52
  Installing : subversion-libs-1.7.14-14.el7.x86_64                       47/52
  Installing : subversion-1.7.14-14.el7.x86_64                            48/52
  Installing : golang-1.11.5-1.el7.x86_64                                 49/52
  Installing : golang-bin-1.11.5-1.el7.x86_64                             50/52
  Installing : rpm-build-4.11.3-35.el7.x86_64                             51/52
  Installing : mock-1.4.16-1.el7.noarch                                   52/52
  Verifying  : trousers-0.3.14-2.el7.x86_64                                1/52
  Verifying  : python36-idna-2.7-2.el7.noarch                              2/52
  Verifying  : rpm-build-4.11.3-35.el7.x86_64                              3/52
  Verifying  : glibc-headers-2.17-260.el7_6.6.x86_64                       4/52
  Verifying  : python36-pysocks-1.6.8-6.el7.noarch                         5/52
  Verifying  : mercurial-2.6.2-8.el7_4.x86_64                              6/52
  Verifying  : zip-3.0-11.el7.x86_64                                       7/52
  Verifying  : python36-3.6.8-1.el7.x86_64                                 8/52
  Verifying  : subversion-libs-1.7.14-14.el7.x86_64                        9/52
  Verifying  : python36-urllib3-1.19.1-5.el7.noarch                       10/52
  Verifying  : kernel-headers-3.10.0-957.27.2.el7.x86_64                  11/52
  Verifying  : nettle-2.7.1-8.el7.x86_64                                  12/52
  Verifying  : gcc-4.8.5-36.el7_6.2.x86_64                                13/52
  Verifying  : golang-src-1.11.5-1.el7.noarch                             14/52
  Verifying  : python36-pyroute2-0.4.13-2.el7.noarch                      15/52
  Verifying  : pigz-2.3.4-1.el7.x86_64                                    16/52
  Verifying  : perl-srpm-macros-1-8.el7.noarch                            17/52
  Verifying  : golang-1.11.5-1.el7.x86_64                                 18/52
  Verifying  : perl-Thread-Queue-3.02-2.el7.noarch                        19/52
  Verifying  : golang-bin-1.11.5-1.el7.x86_64                             20/52
  Verifying  : gdb-7.6.1-114.el7.x86_64                                   21/52
  Verifying  : redhat-rpm-config-9.1.0-87.el7.centos.noarch               22/52
  Verifying  : gnutls-3.3.29-9.el7_6.x86_64                               23/52
  Verifying  : mock-1.4.16-1.el7.noarch                                   24/52
  Verifying  : libmodman-2.0.1-8.el7.x86_64                               25/52
  Verifying  : python36-setuptools-39.2.0-3.el7.noarch                    26/52
  Verifying  : mpfr-3.1.1-4.el7.x86_64                                    27/52
  Verifying  : python36-six-1.11.0-3.el7.noarch                           28/52
  Verifying  : apr-util-1.5.2-6.el7.x86_64                                29/52
  Verifying  : python36-chardet-2.3.0-6.el7.noarch                        30/52
  Verifying  : patch-2.7.1-10.el7_5.x86_64                                31/52
  Verifying  : libmpc-1.0.1-3.el7.x86_64                                  32/52
  Verifying  : pakchois-0.4-10.el7.x86_64                                 33/52
  Verifying  : neon-0.30.0-3.el7.x86_64                                   34/52
  Verifying  : usermode-1.111-5.el7.x86_64                                35/52
  Verifying  : apr-1.4.8-3.el7_4.1.x86_64                                 36/52
  Verifying  : libproxy-0.4.11-11.el7.x86_64                              37/52
  Verifying  : mock-core-configs-30.4-1.el7.noarch                        38/52
  Verifying  : distribution-gpg-keys-1.32-1.el7.noarch                    39/52
  Verifying  : glibc-devel-2.17-260.el7_6.6.x86_64                        40/52
  Verifying  : bzip2-1.0.6-13.el7.x86_64                                  41/52
  Verifying  : subversion-1.7.14-14.el7.x86_64                            42/52
  Verifying  : python36-distro-1.2.0-3.el7.noarch                         43/52
  Verifying  : dwz-0.11-3.el7.x86_64                                      44/52
  Verifying  : unzip-6.0-19.el7.x86_64                                    45/52
  Verifying  : python36-markupsafe-0.23-3.el7.x86_64                      46/52
  Verifying  : cpp-4.8.5-36.el7_6.2.x86_64                                47/52
  Verifying  : python36-requests-2.12.5-3.el7.noarch                      48/52
  Verifying  : python36-jinja2-2.8.1-2.el7.noarch                         49/52
  Verifying  : python36-libs-3.6.8-1.el7.x86_64                           50/52
  Verifying  : elfutils-0.172-2.el7.x86_64                                51/52
  Verifying  : python36-rpm-4.11.3-4.el7.x86_64                           52/52

Installed:
  golang.x86_64 0:1.11.5-1.el7
  mock.noarch 0:1.4.16-1.el7
  rpm-build.x86_64 0:4.11.3-35.el7

Dependency Installed:
  apr.x86_64 0:1.4.8-3.el7_4.1
  apr-util.x86_64 0:1.5.2-6.el7
  bzip2.x86_64 0:1.0.6-13.el7
  cpp.x86_64 0:4.8.5-36.el7_6.2
  distribution-gpg-keys.noarch 0:1.32-1.el7
  dwz.x86_64 0:0.11-3.el7
  elfutils.x86_64 0:0.172-2.el7
  gcc.x86_64 0:4.8.5-36.el7_6.2
  gdb.x86_64 0:7.6.1-114.el7
  glibc-devel.x86_64 0:2.17-260.el7_6.6
  glibc-headers.x86_64 0:2.17-260.el7_6.6
  gnutls.x86_64 0:3.3.29-9.el7_6
  golang-bin.x86_64 0:1.11.5-1.el7
  golang-src.noarch 0:1.11.5-1.el7
  kernel-headers.x86_64 0:3.10.0-957.27.2.el7
  libmodman.x86_64 0:2.0.1-8.el7
  libmpc.x86_64 0:1.0.1-3.el7
  libproxy.x86_64 0:0.4.11-11.el7
  mercurial.x86_64 0:2.6.2-8.el7_4
  mock-core-configs.noarch 0:30.4-1.el7
  mpfr.x86_64 0:3.1.1-4.el7
  neon.x86_64 0:0.30.0-3.el7
  nettle.x86_64 0:2.7.1-8.el7
  pakchois.x86_64 0:0.4-10.el7
  patch.x86_64 0:2.7.1-10.el7_5
  perl-Thread-Queue.noarch 0:3.02-2.el7
  perl-srpm-macros.noarch 0:1-8.el7
  pigz.x86_64 0:2.3.4-1.el7
  python36.x86_64 0:3.6.8-1.el7
  python36-chardet.noarch 0:2.3.0-6.el7
  python36-distro.noarch 0:1.2.0-3.el7
  python36-idna.noarch 0:2.7-2.el7
  python36-jinja2.noarch 0:2.8.1-2.el7
  python36-libs.x86_64 0:3.6.8-1.el7
  python36-markupsafe.x86_64 0:0.23-3.el7
  python36-pyroute2.noarch 0:0.4.13-2.el7
  python36-pysocks.noarch 0:1.6.8-6.el7
  python36-requests.noarch 0:2.12.5-3.el7
  python36-rpm.x86_64 0:4.11.3-4.el7
  python36-setuptools.noarch 0:39.2.0-3.el7
  python36-six.noarch 0:1.11.0-3.el7
  python36-urllib3.noarch 0:1.19.1-5.el7
  redhat-rpm-config.noarch 0:9.1.0-87.el7.centos
  subversion.x86_64 0:1.7.14-14.el7
  subversion-libs.x86_64 0:1.7.14-14.el7
  trousers.x86_64 0:0.3.14-2.el7
  unzip.x86_64 0:6.0-19.el7
  usermode.x86_64 0:1.111-5.el7
  zip.x86_64 0:3.0-11.el7

Complete!
LINUX
Installing dep.
Version: v0.5.0
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   605    0   605    0     0   2153      0 --:--:-- --:--:-- --:--:--  2160
100 8513k  100 8513k    0     0  10.5M      0 --:--:-- --:--:-- --:--:-- 21.6M
Installing gometalinter.
Version: 2.0.5
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   627    0   627    0     0   1986      0 --:--:-- --:--:-- --:--:--  1990
100 38.3M  100 38.3M    0     0  38.7M      0 --:--:-- --:--:-- --:--:-- 78.6M
Installing etcd.
Version: v3.3.9
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   153    0   153    0     0    557      0 --:--:-- --:--:-- --:--:--   558
  0     0    0   620    0     0   1638      0 --:--:-- --:--:-- --:--:--  1638
100 10.7M  100 10.7M    0     0  9582k      0  0:00:01  0:00:01 --:--:-- 19.6M
~/nightlyrpm5Mkox3/go/src/github.com/gluster/glusterd2 ~
Installing vendored packages
Creating dist archive /root/nightlyrpm5Mkox3/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz
Created dist archive /root/nightlyrpm5Mkox3/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz
~
~/nightlyrpm5Mkox3 ~
INFO: mock.py version 1.4.16 starting (python version = 3.6.8)...
Start: init plugins
INFO: selinux disabled
Finish: init plugins
Start: run
INFO: Start(/root/nightlyrpm5Mkox3/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm)  Config(epel-7-x86_64)
Start: clean chroot
Finish: clean chroot
Start: chroot init
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled yum cache
Start: cleaning yum metadata
Finish: cleaning yum metadata
INFO: enabled HW Info plugin
Mock Version: 1.4.16
INFO: Mock Version: 1.4.16
Start: yum install
Finish: yum install
Start: creating root cache
Finish: creating root cache
Finish: chroot init
INFO: Installed packages:
Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: Outputting list of installed packages
Finish: Outputting list of installed packages
ERROR: Exception(/root/nightlyrpm5Mkox3/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 26 seconds
INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64
INFO: Cleaning up build root ('cleanup_on_failure=True')
Start: clean chroot
Finish: clean chroot
ERROR: Command failed:
 # /usr/bin/systemd-nspawn -q -M 3e15d51a78504543bb2074e6b8bef8e9 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.y4i1mm8t:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec
Build step 'Execute shell' marked build as failure
Performing Post build task...
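Mock's final "Command failed" line above only shows the systemd-nspawn wrapper, not the underlying rpmbuild error. As a debugging sketch (assuming the mock and mock-core-configs packages from this log are installed, and using the SRPM path from the log), the failing chroot build can usually be re-run locally and the real error read from mock's result logs:

```shell
# Re-run the chroot build that failed above (sketch; paths from the log).
mock -r epel-7-x86_64 --rebuild \
    /root/nightlyrpm5Mkox3/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm

# The generic "Command failed" hides the actual rpmbuild failure; that
# detail is recorded in build.log under mock's result directory.
less /var/lib/mock/epel-7-x86_64/result/build.log
```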
Match found for :Building remotely : True
Logical operation result is TRUE
Running script  : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins4901625483296076087.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done 929423db
+---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname  | ip_address  | chassis | used_count | current_state | comment  | distro | rel  | centos_version | architecture | node_pool | console_port | flavor |
+---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 91      | n27.pufty | 172.19.3.91 | pufty   | 3819       | Deployed      | 929423db | None   | None | 7              | x86_64       | 1         | 2260         | None   |
+---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0

From ci at centos.org Thu Aug 1 00:41:00 2019
From: ci at centos.org (ci at centos.org)
Date: Thu, 1 Aug 2019 00:41:00 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #246
In-Reply-To: <506954699.1684.1564533733656.JavaMail.jenkins@jenkins.ci.centos.org>
References: <506954699.1684.1564533733656.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1142073701.1779.1564620060290.JavaMail.jenkins@jenkins.ci.centos.org>

See 
------------------------------------------
[...truncated 289.06 KB...]
TASK [container-engine/docker : check number of search domains] **************** Thursday 01 August 2019 01:40:16 +0100 (0:00:00.293) 0:03:01.656 ******* TASK [container-engine/docker : check length of search domains] **************** Thursday 01 August 2019 01:40:16 +0100 (0:00:00.300) 0:03:01.957 ******* TASK [container-engine/docker : check for minimum kernel version] ************** Thursday 01 August 2019 01:40:17 +0100 (0:00:00.346) 0:03:02.303 ******* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Thursday 01 August 2019 01:40:17 +0100 (0:00:00.294) 0:03:02.598 ******* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] *** Thursday 01 August 2019 01:40:18 +0100 (0:00:00.657) 0:03:03.256 ******* TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Thursday 01 August 2019 01:40:19 +0100 (0:00:01.305) 0:03:04.562 ******* TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Thursday 01 August 2019 01:40:19 +0100 (0:00:00.257) 0:03:04.819 ******* TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Thursday 01 August 2019 01:40:20 +0100 (0:00:00.268) 0:03:05.088 ******* TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Thursday 01 August 2019 01:40:20 +0100 (0:00:00.375) 0:03:05.464 ******* TASK [container-engine/docker : Configure docker repository on Fedora] ********* Thursday 01 August 2019 01:40:20 +0100 (0:00:00.342) 0:03:05.807 ******* TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Thursday 01 August 2019 01:40:21 +0100 (0:00:00.279) 0:03:06.086 ******* TASK [container-engine/docker : Copy yum.conf for editing] ********************* Thursday 01 August 2019 01:40:21 +0100 (0:00:00.349) 0:03:06.436 ******* TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Thursday 01 August 
2019 01:40:21 +0100 (0:00:00.351) 0:03:06.787 ******* TASK [container-engine/docker : ensure docker packages are installed] ********** Thursday 01 August 2019 01:40:22 +0100 (0:00:00.335) 0:03:07.122 ******* TASK [container-engine/docker : Ensure docker packages are installed] ********** Thursday 01 August 2019 01:40:22 +0100 (0:00:00.388) 0:03:07.511 ******* TASK [container-engine/docker : get available packages on Ubuntu] ************** Thursday 01 August 2019 01:40:22 +0100 (0:00:00.328) 0:03:07.839 ******* TASK [container-engine/docker : show available packages on ubuntu] ************* Thursday 01 August 2019 01:40:23 +0100 (0:00:00.304) 0:03:08.144 ******* TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Thursday 01 August 2019 01:40:23 +0100 (0:00:00.300) 0:03:08.444 ******* TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Thursday 01 August 2019 01:40:23 +0100 (0:00:00.331) 0:03:08.775 ******* ok: [kube3] ok: [kube1] ok: [kube2] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Thursday 01 August 2019 01:40:25 +0100 (0:00:02.011) 0:03:10.787 ******* ok: [kube3] ok: [kube1] ok: [kube2] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Thursday 01 August 2019 01:40:26 +0100 (0:00:01.079) 0:03:11.866 ******* TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Thursday 01 August 2019 01:40:27 +0100 (0:00:00.304) 0:03:12.171 ******* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Thursday 01 August 2019 01:40:28 +0100 (0:00:01.047) 0:03:13.219 ******* TASK [container-engine/docker : get systemd version] *************************** Thursday 01 August 2019 01:40:28 +0100 (0:00:00.313) 0:03:13.532 ******* TASK [container-engine/docker : Write docker.service systemd file] ************* Thursday 01 August 2019 01:40:28 +0100 (0:00:00.317) 0:03:13.850 ******* TASK [container-engine/docker : Write docker options systemd drop-in] ********** Thursday 01 August 2019 01:40:29 +0100 (0:00:00.417) 0:03:14.268 ******* changed: [kube3] changed: [kube2] changed: [kube1] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Thursday 01 August 2019 01:40:31 +0100 (0:00:02.186) 0:03:16.454 ******* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Thursday 01 August 2019 01:40:33 +0100 (0:00:02.065) 0:03:18.519 ******* TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Thursday 01 August 2019 01:40:33 +0100 (0:00:00.380) 0:03:18.900 ******* RUNNING HANDLER [container-engine/docker : restart docker] ********************* Thursday 01 August 2019 01:40:34 +0100 (0:00:00.255) 0:03:19.156 ******* changed: [kube3] changed: [kube1] changed: [kube2] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Thursday 01 August 2019 01:40:36 +0100 (0:00:01.854) 0:03:21.011 ******* changed: [kube3] changed: [kube1] changed: [kube2] RUNNING HANDLER 
[container-engine/docker : Docker | reload docker.socket] ****** Thursday 01 August 2019 01:40:37 +0100 (0:00:01.213) 0:03:22.224 ******* RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Thursday 01 August 2019 01:40:37 +0100 (0:00:00.371) 0:03:22.596 ******* changed: [kube2] changed: [kube3] changed: [kube1] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Thursday 01 August 2019 01:40:41 +0100 (0:00:04.164) 0:03:26.760 ******* Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube3] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Thursday 01 August 2019 01:40:51 +0100 (0:00:10.211) 0:03:36.972 ******* changed: [kube3] changed: [kube2] changed: [kube1] TASK [container-engine/docker : ensure docker service is started and enabled] *** Thursday 01 August 2019 01:40:53 +0100 (0:00:01.254) 0:03:38.227 ******* ok: [kube1] => (item=docker) ok: [kube2] => (item=docker) ok: [kube3] => (item=docker) TASK [download : include_tasks] ************************************************ Thursday 01 August 2019 01:40:54 +0100 (0:00:01.202) 0:03:39.429 ******* included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Thursday 01 August 2019 01:40:54 +0100 (0:00:00.525) 0:03:39.955 ******* ok: [kube1] ok: [kube2] ok: [kube3] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Thursday 01 August 2019 01:40:56 +0100 (0:00:01.034) 0:03:40.990 ******* changed: [kube1] changed: [kube2] changed: [kube3] TASK [download : container_download | create local directory for saved/loaded container images] *** Thursday 01 August 2019 01:40:56 +0100 (0:00:00.957) 0:03:41.947 ******* TASK [download : 
Download items] *********************************************** Thursday 01 August 2019 01:40:57 +0100 (0:00:00.138) 0:03:42.086 ******* fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} PLAY RECAP ********************************************************************* kube1 : ok=108 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0 kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0 kube3 : ok=95 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0 Thursday 01 August 2019 01:40:59 +0100 (0:00:02.736) 0:03:44.822 ******* =============================================================================== Install packages ------------------------------------------------------- 35.14s Wait for host to be available ------------------------------------------ 21.58s gather facts from all instances ---------------------------------------- 17.05s container-engine/docker : Docker | pause while Docker restarts --------- 10.21s Persist loaded modules -------------------------------------------------- 6.37s container-engine/docker : Docker | reload docker ------------------------ 4.16s kubernetes/preinstall : Create kubernetes directories ------------------- 4.15s download : Download items ----------------------------------------------- 2.74s bootstrap-os : Assign 
inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.67s kubernetes/preinstall : Create cni directories -------------------------- 2.65s Load required kernel modules -------------------------------------------- 2.63s Extend root VG ---------------------------------------------------------- 2.59s bootstrap-os : Gather nodes hostnames ----------------------------------- 2.54s Gathering Facts --------------------------------------------------------- 2.30s kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.22s container-engine/docker : Write docker options systemd drop-in ---------- 2.19s download : Sync container ----------------------------------------------- 2.10s kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.09s download : Download items ----------------------------------------------- 2.07s container-engine/docker : Write docker dns systemd drop-in -------------- 2.07s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
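The ten failures in the recap above all trace back to the same Ansible error: `'delegate_to' is not a valid attribute for a TaskInclude`, i.e. `delegate_to` was set directly on an `include_tasks` entry in `roles/download/tasks/download_container.yml`. A minimal sketch of the usual fix, not the actual kubespray patch: since Ansible 2.7 an include can forward keywords to the included tasks via `apply` (the included file name and the `download_delegate` variable here are illustrative placeholders, not taken from the log):

```yaml
# Sketch only: 'delegate_to' may not sit on the include itself,
# but it can be applied to every task the include pulls in.
- name: container_download | Make download decision if pull is required by tag or sha256
  include_tasks: some_included_tasks.yml        # hypothetical file name
  # delegate_to: "{{ download_delegate }}"      # <- invalid on a TaskInclude
  apply:
    delegate_to: "{{ download_delegate }}"      # valid: applied to included tasks
```

The alternative is to move `delegate_to` onto the individual tasks inside the included file; either way the include line itself must not carry the keyword.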
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Thu Aug 1 01:18:21 2019 From: ci at centos.org (ci at centos.org) Date: Thu, 1 Aug 2019 01:18:21 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #271 In-Reply-To: <215033897.1689.1564535706466.JavaMail.jenkins@jenkins.ci.centos.org> References: <215033897.1689.1564535706466.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <743884876.1785.1564622301707.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 57.21 KB...] changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. 
Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... [701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], 
u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── create └── prepare --> Scenario: 'default' --> Action: 'create' [DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg. PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] 
******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', 
u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Fri Aug 2 00:14:26 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 2 Aug 2019 00:14:26 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #443 In-Reply-To: <1997295368.1777.1564618610586.JavaMail.jenkins@jenkins.ci.centos.org> References: <1997295368.1777.1564618610586.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1050301423.1926.1564704866529.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 39.02 KB...] Transaction test succeeded Running transaction Installing : python36-libs-3.6.8-1.el7.x86_64 1/52 Installing : python36-3.6.8-1.el7.x86_64 2/52 Installing : apr-1.4.8-3.el7_4.1.x86_64 3/52 Installing : mpfr-3.1.1-4.el7.x86_64 4/52 Installing : libmpc-1.0.1-3.el7.x86_64 5/52 Installing : apr-util-1.5.2-6.el7.x86_64 6/52 Installing : python36-six-1.11.0-3.el7.noarch 7/52 Installing : cpp-4.8.5-36.el7_6.2.x86_64 8/52 Installing : python36-idna-2.7-2.el7.noarch 9/52 Installing : python36-pysocks-1.6.8-6.el7.noarch 10/52 Installing : python36-urllib3-1.19.1-5.el7.noarch 11/52 Installing : python36-pyroute2-0.4.13-2.el7.noarch 12/52 Installing : python36-setuptools-39.2.0-3.el7.noarch 13/52 Installing : python36-chardet-2.3.0-6.el7.noarch 14/52 Installing : python36-requests-2.12.5-3.el7.noarch 15/52 Installing : python36-distro-1.2.0-3.el7.noarch 16/52 Installing : python36-markupsafe-0.23-3.el7.x86_64 17/52 Installing : python36-jinja2-2.8.1-2.el7.noarch 18/52 Installing : python36-rpm-4.11.3-4.el7.x86_64 19/52 Installing : elfutils-0.172-2.el7.x86_64 20/52 Installing : unzip-6.0-19.el7.x86_64 21/52 Installing : dwz-0.11-3.el7.x86_64 
22/52 Installing : bzip2-1.0.6-13.el7.x86_64 23/52 Installing : distribution-gpg-keys-1.32-1.el7.noarch 24/52 Installing : mock-core-configs-30.4-1.el7.noarch 25/52 Installing : usermode-1.111-5.el7.x86_64 26/52 Installing : pakchois-0.4-10.el7.x86_64 27/52 Installing : patch-2.7.1-10.el7_5.x86_64 28/52 Installing : libmodman-2.0.1-8.el7.x86_64 29/52 Installing : libproxy-0.4.11-11.el7.x86_64 30/52 Installing : gdb-7.6.1-114.el7.x86_64 31/52 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/52 Installing : perl-srpm-macros-1-8.el7.noarch 33/52 Installing : pigz-2.3.4-1.el7.x86_64 34/52 Installing : golang-src-1.11.5-1.el7.noarch 35/52 Installing : nettle-2.7.1-8.el7.x86_64 36/52 Installing : kernel-headers-3.10.0-957.27.2.el7.x86_64 37/52 Installing : glibc-headers-2.17-260.el7_6.6.x86_64 38/52 Installing : glibc-devel-2.17-260.el7_6.6.x86_64 39/52 Installing : gcc-4.8.5-36.el7_6.2.x86_64 40/52 Installing : zip-3.0-11.el7.x86_64 41/52 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 42/52 Installing : mercurial-2.6.2-8.el7_4.x86_64 43/52 Installing : trousers-0.3.14-2.el7.x86_64 44/52 Installing : gnutls-3.3.29-9.el7_6.x86_64 45/52 Installing : neon-0.30.0-3.el7.x86_64 46/52 Installing : subversion-libs-1.7.14-14.el7.x86_64 47/52 Installing : subversion-1.7.14-14.el7.x86_64 48/52 Installing : golang-1.11.5-1.el7.x86_64 49/52 Installing : golang-bin-1.11.5-1.el7.x86_64 50/52 Installing : rpm-build-4.11.3-35.el7.x86_64 51/52 Installing : mock-1.4.16-1.el7.noarch 52/52 Verifying : trousers-0.3.14-2.el7.x86_64 1/52 Verifying : python36-idna-2.7-2.el7.noarch 2/52 Verifying : rpm-build-4.11.3-35.el7.x86_64 3/52 Verifying : glibc-headers-2.17-260.el7_6.6.x86_64 4/52 Verifying : python36-pysocks-1.6.8-6.el7.noarch 5/52 Verifying : mercurial-2.6.2-8.el7_4.x86_64 6/52 Verifying : zip-3.0-11.el7.x86_64 7/52 Verifying : python36-3.6.8-1.el7.x86_64 8/52 Verifying : subversion-libs-1.7.14-14.el7.x86_64 9/52 Verifying : python36-urllib3-1.19.1-5.el7.noarch 10/52 
Verifying : kernel-headers-3.10.0-957.27.2.el7.x86_64 11/52 Verifying : nettle-2.7.1-8.el7.x86_64 12/52 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 13/52 Verifying : golang-src-1.11.5-1.el7.noarch 14/52 Verifying : python36-pyroute2-0.4.13-2.el7.noarch 15/52 Verifying : pigz-2.3.4-1.el7.x86_64 16/52 Verifying : perl-srpm-macros-1-8.el7.noarch 17/52 Verifying : golang-1.11.5-1.el7.x86_64 18/52 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 19/52 Verifying : golang-bin-1.11.5-1.el7.x86_64 20/52 Verifying : gdb-7.6.1-114.el7.x86_64 21/52 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 22/52 Verifying : gnutls-3.3.29-9.el7_6.x86_64 23/52 Verifying : mock-1.4.16-1.el7.noarch 24/52 Verifying : libmodman-2.0.1-8.el7.x86_64 25/52 Verifying : python36-setuptools-39.2.0-3.el7.noarch 26/52 Verifying : mpfr-3.1.1-4.el7.x86_64 27/52 Verifying : python36-six-1.11.0-3.el7.noarch 28/52 Verifying : apr-util-1.5.2-6.el7.x86_64 29/52 Verifying : python36-chardet-2.3.0-6.el7.noarch 30/52 Verifying : patch-2.7.1-10.el7_5.x86_64 31/52 Verifying : libmpc-1.0.1-3.el7.x86_64 32/52 Verifying : pakchois-0.4-10.el7.x86_64 33/52 Verifying : neon-0.30.0-3.el7.x86_64 34/52 Verifying : usermode-1.111-5.el7.x86_64 35/52 Verifying : apr-1.4.8-3.el7_4.1.x86_64 36/52 Verifying : libproxy-0.4.11-11.el7.x86_64 37/52 Verifying : mock-core-configs-30.4-1.el7.noarch 38/52 Verifying : distribution-gpg-keys-1.32-1.el7.noarch 39/52 Verifying : glibc-devel-2.17-260.el7_6.6.x86_64 40/52 Verifying : bzip2-1.0.6-13.el7.x86_64 41/52 Verifying : subversion-1.7.14-14.el7.x86_64 42/52 Verifying : python36-distro-1.2.0-3.el7.noarch 43/52 Verifying : dwz-0.11-3.el7.x86_64 44/52 Verifying : unzip-6.0-19.el7.x86_64 45/52 Verifying : python36-markupsafe-0.23-3.el7.x86_64 46/52 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 47/52 Verifying : python36-requests-2.12.5-3.el7.noarch 48/52 Verifying : python36-jinja2-2.8.1-2.el7.noarch 49/52 Verifying : python36-libs-3.6.8-1.el7.x86_64 50/52 Verifying : 
elfutils-0.172-2.el7.x86_64 51/52 Verifying : python36-rpm-4.11.3-4.el7.x86_64 52/52 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.16-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.32-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.6 glibc-headers.x86_64 0:2.17-260.el7_6.6 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.27.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.4-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python36.x86_64 0:3.6.8-1.el7 python36-chardet.noarch 0:2.3.0-6.el7 python36-distro.noarch 0:1.2.0-3.el7 python36-idna.noarch 0:2.7-2.el7 python36-jinja2.noarch 0:2.8.1-2.el7 python36-libs.x86_64 0:3.6.8-1.el7 python36-markupsafe.x86_64 0:0.23-3.el7 python36-pyroute2.noarch 0:0.4.13-2.el7 python36-pysocks.noarch 0:1.6.8-6.el7 python36-requests.noarch 0:2.12.5-3.el7 python36-rpm.x86_64 0:4.11.3-4.el7 python36-setuptools.noarch 0:39.2.0-3.el7 python36-six.noarch 0:1.11.0-3.el7 python36-urllib3.noarch 0:1.19.1-5.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 2477 0 --:--:-- --:--:-- --:--:-- 2469 100 8513k 100 8513k 0 0 18.7M 0 --:--:-- --:--:-- --:--:-- 18.7M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2351 0 --:--:-- --:--:-- --:--:-- 2357 65 38.3M 65 25.1M 0 0 37.2M 0 0:00:01 --:--:-- 0:00:01 37.2M100 38.3M 100 38.3M 0 0 48.4M 0 --:--:-- --:--:-- --:--:-- 112M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 900 0 --:--:-- --:--:-- --:--:-- 900 0 0 0 620 0 0 2359 0 --:--:-- --:--:-- --:--:-- 2359 100 10.7M 100 10.7M 0 0 15.0M 0 --:--:-- --:--:-- --:--:-- 15.0M ~/nightlyrpmSwtAXb/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmSwtAXb/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmSwtAXb/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmSwtAXb ~ INFO: mock.py version 1.4.16 starting (python version = 3.6.8)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmSwtAXb/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.16 INFO: Mock Version: 1.4.16 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmSwtAXb/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 1 minutes 33 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 60f35769724f41f69c73b0131475b259 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.nu2m65kp:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins146293530857984946.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done f049e8e1 +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 211 | n20.dusty | 172.19.2.84 | dusty | 3837 | Deployed | f049e8e1 | None | None | 7 | x86_64 | 1 | 2190 | None | +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Fri Aug 2 00:37:55 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 2 Aug 2019 00:37:55 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #247 In-Reply-To: <1142073701.1779.1564620060290.JavaMail.jenkins@jenkins.ci.centos.org> References: <1142073701.1779.1564620060290.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1651489082.1927.1564706275295.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 288.96 KB...] 
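The post-build task above runs `cico-node-done-from-ansible.sh`, which loops over session IDs in a SSID file and releases each Duffy node with `cico -q node done`. A self-contained sketch of that loop; `release_nodes` is a function name introduced here for illustration (the CI script inlines the loop), and the `cico` command is expected to exist on the Jenkins slave:

```shell
# Sketch of the cico-node-done-from-ansible.sh loop: read session IDs
# from a file and release each corresponding node via the Duffy CLI.
release_nodes() {
    ssid_file="$1"
    while IFS= read -r ssid; do
        [ -n "$ssid" ] || continue      # skip blank lines
        cico -q node done "$ssid"       # release the node for this session
    done < "$ssid_file"
}
```

Because the `cico` call goes through normal command lookup, a shell function of the same name can stub it out when exercising the loop outside the CI environment.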
TASK [container-engine/docker : check number of search domains] **************** Friday 02 August 2019 01:37:29 +0100 (0:00:00.132) 0:02:01.857 ********* TASK [container-engine/docker : check length of search domains] **************** Friday 02 August 2019 01:37:29 +0100 (0:00:00.135) 0:02:01.992 ********* TASK [container-engine/docker : check for minimum kernel version] ************** Friday 02 August 2019 01:37:29 +0100 (0:00:00.130) 0:02:02.123 ********* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Friday 02 August 2019 01:37:29 +0100 (0:00:00.130) 0:02:02.254 ********* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] *** Friday 02 August 2019 01:37:29 +0100 (0:00:00.255) 0:02:02.509 ********* TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Friday 02 August 2019 01:37:30 +0100 (0:00:00.636) 0:02:03.145 ********* TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Friday 02 August 2019 01:37:30 +0100 (0:00:00.113) 0:02:03.259 ********* TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Friday 02 August 2019 01:37:30 +0100 (0:00:00.113) 0:02:03.373 ********* TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Friday 02 August 2019 01:37:30 +0100 (0:00:00.145) 0:02:03.519 ********* TASK [container-engine/docker : Configure docker repository on Fedora] ********* Friday 02 August 2019 01:37:30 +0100 (0:00:00.136) 0:02:03.655 ********* TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Friday 02 August 2019 01:37:30 +0100 (0:00:00.127) 0:02:03.783 ********* TASK [container-engine/docker : Copy yum.conf for editing] ********************* Friday 02 August 2019 01:37:31 +0100 (0:00:00.122) 0:02:03.905 ********* TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Friday 02 August 2019 
01:37:31 +0100 (0:00:00.130) 0:02:04.035 ********* TASK [container-engine/docker : ensure docker packages are installed] ********** Friday 02 August 2019 01:37:31 +0100 (0:00:00.131) 0:02:04.167 ********* TASK [container-engine/docker : Ensure docker packages are installed] ********** Friday 02 August 2019 01:37:31 +0100 (0:00:00.161) 0:02:04.328 ********* TASK [container-engine/docker : get available packages on Ubuntu] ************** Friday 02 August 2019 01:37:31 +0100 (0:00:00.153) 0:02:04.481 ********* TASK [container-engine/docker : show available packages on ubuntu] ************* Friday 02 August 2019 01:37:31 +0100 (0:00:00.130) 0:02:04.612 ********* TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Friday 02 August 2019 01:37:31 +0100 (0:00:00.125) 0:02:04.737 ********* TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Friday 02 August 2019 01:37:32 +0100 (0:00:00.128) 0:02:04.865 ********* ok: [kube1] ok: [kube3] ok: [kube2] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Friday 02 August 2019 01:37:32 +0100 (0:00:00.903) 0:02:05.769 ********* ok: [kube1] ok: [kube2] ok: [kube3] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Friday 02 August 2019 01:37:33 +0100 (0:00:00.492) 0:02:06.262 ********* TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Friday 02 August 2019 01:37:33 +0100 (0:00:00.127) 0:02:06.390 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Friday 02 August 2019 01:37:34 +0100 (0:00:00.496) 0:02:06.886 ********* TASK [container-engine/docker : get systemd version] *************************** Friday 02 August 2019 01:37:34 +0100 (0:00:00.137) 0:02:07.024 ********* TASK [container-engine/docker : Write docker.service systemd file] ************* Friday 02 August 2019 01:37:34 +0100 (0:00:00.134) 0:02:07.159 ********* TASK [container-engine/docker : Write docker options systemd drop-in] ********** Friday 02 August 2019 01:37:34 +0100 (0:00:00.134) 0:02:07.294 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Friday 02 August 2019 01:37:35 +0100 (0:00:01.008) 0:02:08.302 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Friday 02 August 2019 01:37:36 +0100 (0:00:00.941) 0:02:09.243 ********* TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Friday 02 August 2019 01:37:36 +0100 (0:00:00.141) 0:02:09.384 ********* RUNNING HANDLER [container-engine/docker : restart docker] ********************* Friday 02 August 2019 01:37:36 +0100 (0:00:00.108) 0:02:09.493 ********* changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Friday 02 August 2019 01:37:37 +0100 (0:00:00.898) 0:02:10.391 ********* changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER 
[container-engine/docker : Docker | reload docker.socket] ****** Friday 02 August 2019 01:37:38 +0100 (0:00:00.546) 0:02:10.937 ********* RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Friday 02 August 2019 01:37:38 +0100 (0:00:00.126) 0:02:11.064 ********* changed: [kube1] changed: [kube3] changed: [kube2] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Friday 02 August 2019 01:37:41 +0100 (0:00:03.014) 0:02:14.078 ********* Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube1] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Friday 02 August 2019 01:37:51 +0100 (0:00:10.099) 0:02:24.177 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : ensure docker service is started and enabled] *** Friday 02 August 2019 01:37:51 +0100 (0:00:00.524) 0:02:24.701 ********* ok: [kube1] => (item=docker) ok: [kube2] => (item=docker) ok: [kube3] => (item=docker) TASK [download : include_tasks] ************************************************ Friday 02 August 2019 01:37:52 +0100 (0:00:00.619) 0:02:25.321 ********* included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Friday 02 August 2019 01:37:52 +0100 (0:00:00.226) 0:02:25.547 ********* ok: [kube1] ok: [kube2] ok: [kube3] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Friday 02 August 2019 01:37:53 +0100 (0:00:00.485) 0:02:26.033 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [download : container_download | create local directory for saved/loaded container images] *** Friday 02 August 2019 01:37:53 +0100 (0:00:00.448) 0:02:26.481 ********* TASK [download : 
Download items] *********************************************** Friday 02 August 2019 01:37:53 +0100 (0:00:00.063) 0:02:26.545 ********* fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} [...the identical TaskInclude failure was reported ten times each by kube1, kube2 and kube3 (failed=10 per host in the recap below); the repeats are truncated...] included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 PLAY RECAP ********************************************************************* kube1 : ok=109 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0 kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0 kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0 Friday 02 August 2019 01:37:55 +0100 (0:00:01.346) 0:02:27.891 ********* =============================================================================== Install packages ------------------------------------------------------- 26.09s Extend root VG --------------------------------------------------------- 16.46s Wait for host to be available ------------------------------------------ 16.19s container-engine/docker : Docker | pause while Docker restarts --------- 10.10s gather facts from all instances ---------------------------------------- 10.07s Persist loaded modules -------------------------------------------------- 3.42s container-engine/docker : Docker | reload docker ------------------------ 3.01s kubernetes/preinstall : Create kubernetes directories ------------------- 1.84s Extend the root LV and FS to
occupy remaining space --------------------- 1.70s Load required kernel modules -------------------------------------------- 1.68s bootstrap-os : Gather nodes hostnames ----------------------------------- 1.59s bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 1.41s download : Download items ----------------------------------------------- 1.35s Gathering Facts --------------------------------------------------------- 1.33s download : Download items ----------------------------------------------- 1.30s download : Sync container ----------------------------------------------- 1.24s bootstrap-os : Disable fastestmirror plugin ----------------------------- 1.20s kubernetes/preinstall : Create cni directories -------------------------- 1.15s bootstrap-os : check if atomic host ------------------------------------- 1.13s bootstrap-os : Create remote_tmp for it is used by another module ------- 1.10s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
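[Editorial note on the failure above: Ansible rejects `delegate_to` as a keyword on a dynamic task include (a TaskInclude object), and contemporary Ansible 2.8.x releases reportedly started enforcing this against patterns kubespray's download role relied on. A minimal sketch of the rejected shape and one common workaround, moving the keyword onto the tasks inside the included file; the file, task, and variable names below are illustrative, not taken from the kubespray source:]

```yaml
# Rejected: a dynamic include carrying delegate_to directly
# - name: container_download | include per-item download tasks   # illustrative name
#   include_tasks: pull_image.yml                                # illustrative file
#   delegate_to: "{{ download_delegate }}"    # -> 'delegate_to' is not a valid
#                                             #    attribute for a TaskInclude

# Workaround sketch: put delegate_to on the tasks inside the included file
# (contents of the illustrative pull_image.yml)
- name: container_download | pull image on the delegate host     # illustrative task
  command: docker pull busybox                                   # illustrative command
  delegate_to: "{{ download_delegate | default('localhost') }}"  # assumed variable
```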
Could not match :Build started : False Logical operation result is FALSE Skipping script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0 From ci at centos.org Fri Aug 2 01:18:38 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 2 Aug 2019 01:18:38 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #272 In-Reply-To: <743884876.1785.1564622301707.JavaMail.jenkins@jenkins.ci.centos.org> References: <743884876.1785.1564622301707.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1695768255.1936.1564708718934.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 57.21 KB...] changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... [701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [],
u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── create └── prepare --> Scenario: 'default' --> Action: 'create' [DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg. PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete]
******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[...the same [701] "Role info should contain platforms" and [703] "Should change default metadata" warnings (author, description, company, license) for /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml reported in the previous lint run were emitted again; the repeats are truncated...] An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 Build step 'Execute shell' marked build as failure Performing Post build task...
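[Editorial note on the lint failure above: the ansible-lint [701]/[703] hits all point at the role's boilerplate meta/main.yml, whose galaxy_info still carries the scaffold's placeholder author/description/company/license values and lists no platforms. A sketch of a galaxy_info block that would satisfy those checks; every concrete value below is a placeholder chosen to illustrate the shape, not the project's actual metadata:]

```yaml
# roles/firewall_config/meta/main.yml (illustrative values only)
galaxy_info:
  author: Gluster maintainers                     # [703] replace default author
  description: Configure firewalld for GlusterFS  # [703] replace default description
  company: Example Org                            # [703] replace default company
  license: GPLv3                                  # [703] replace default license
  min_ansible_version: 2.5                        # illustrative version
  platforms:                                      # [701] role info should contain platforms
    - name: EL
      versions:
        - 7
  galaxy_tags: []
dependencies: []
```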
Could not match :Build started : False
Logical operation result is FALSE
Skipping script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0

From ci at centos.org  Sat Aug  3 00:14:59 2019
From: ci at centos.org (ci at centos.org)
Date: Sat, 3 Aug 2019 00:14:59 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #444
In-Reply-To: <1050301423.1926.1564704866529.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1050301423.1926.1564704866529.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1431366258.2054.1564791299441.JavaMail.jenkins@jenkins.ci.centos.org>

See 

------------------------------------------
[...truncated 53.55 KB...]
Trying other mirror.
https://mirror.dal.nexril.net/fedora-epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
https://mirror.chpc.utah.edu/pub/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
http://mirror.nodesdirect.com/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
http://mirrors.syringanetworks.net/fedora-epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
https://mirrors.cat.pdx.edu/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
http://mirror.prgmr.com/pub/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
https://sjc.edge.kernel.org/fedora-buffet/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
https://dl.fedoraproject.org/pub/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
https://mirrors.sonic.net/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
http://mirror.sjc02.svwh.net/fedora-epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
http://mirror.sfo12.us.leaseweb.net/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
http://mirrors.kernel.org/fedora-epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
https://mirrors.xmission.com/fedora-epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
http://fedora.westmancom.com/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
http://mirror.ci.centos.org/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
http://mirror.es.its.nyu.edu/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
http://csc.mcs.sdsmt.edu/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 12] Timeout on http://csc.mcs.sdsmt.edu/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: (28, 'Connection timed out after 30001 milliseconds')
Trying other mirror.
https://d2lzkl7pfhq30w.cloudfront.net/pub/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
https://mirror.umd.edu/fedora/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
https://iad.mirror.rackspace.com/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
http://mirror.us.leaseweb.net/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
http://mirror.cogentco.com/pub/linux/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
http://reflector.westga.edu/repos/Fedora-EPEL/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
https://mirror.vcu.edu/pub/gnu%2Blinux/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
http://mirror.math.princeton.edu/pub/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
http://mirror.cs.pitt.edu/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
http://mirror.cs.princeton.edu/pub/mirrors/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
https://epel.mirror.constant.com/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
https://ewr.edge.kernel.org/fedora-buffet/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
http://mirror.seas.harvard.edu/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
http://mirrors.mit.edu/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
http://fedora-epel.mirrors.tds.net/fedora-epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
http://fedora.mirrors.pair.com/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
https://download-ib01.fedoraproject.org/pub/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
http://mirror.grid.uchicago.edu/pub/linux/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
https://mirror.steadfastnet.com/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
http://ftp.cse.buffalo.edu/pub/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
http://mirror.uic.edu/EPEL/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
https://ord.mirror.rackspace.com/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
http://mirror.metrocast.net/fedora/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
https://mirror.csclub.uwaterloo.ca/fedora/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
https://mirror.us-midwest-1.nexcess.net/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
http://mirrors.liquidweb.com/fedora-epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
https://pubmirror2.math.uh.edu/fedora-buffet/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
http://mirror.coastal.edu/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
http://mirror.rnet.missouri.edu/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
https://dfw.mirror.rackspace.com/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
https://mirror.mrjester.net/fedora/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
http://mirror.its.dal.ca/pub/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
http://mirror.oss.ou.edu/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
http://mirror.twinlakes.net/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
http://mirror.compevo.com/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
https://mirror.colorado.edu/fedora/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
https://mirror.dal.nexril.net/fedora-epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
https://mirror.chpc.utah.edu/pub/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
http://mirror.nodesdirect.com/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
http://mirrors.syringanetworks.net/fedora-epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
https://mirrors.cat.pdx.edu/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
http://mirror.prgmr.com/pub/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
https://sjc.edge.kernel.org/fedora-buffet/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
https://dl.fedoraproject.org/pub/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
https://mirrors.sonic.net/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
http://mirror.sjc02.svwh.net/fedora-epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
http://mirror.sfo12.us.leaseweb.net/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
http://mirrors.kernel.org/fedora-epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
https://mirrors.xmission.com/fedora-epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
http://fedora.westmancom.com/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
https://pubmirror1.math.uh.edu/fedora-buffet/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 12] Timeout on https://pubmirror1.math.uh.edu/fedora-buffet/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: (28, 'Connection timed out after 30001 milliseconds')
Trying other mirror.


 One of the configured repositories failed (Extra Packages for Enterprise Linux 7 - x86_64),
 and yum doesn't have enough cached data to continue. At this point the only
 safe thing yum can do is fail. There are a few ways to work "fix" this:

     1. Contact the upstream for the repository and get them to fix the problem.

     2. Reconfigure the baseurl/etc. for the repository, to point to a working
        upstream. This is most often useful if you are using a newer
        distribution release than is supported by the repository (and the
        packages for the previous distribution release still work).

     3. Run the command with the repository temporarily disabled
            yum --disablerepo=epel ...

     4. Disable the repository permanently, so yum won't use it by default. Yum
        will then just ignore the repository until you permanently enable it
        again or use --enablerepo for temporary usage:

            yum-config-manager --disable epel
        or
            subscription-manager repos --disable=epel

     5. Configure the failing repository to be skipped, if it is unavailable.
        Note that yum will try to contact the repo. when it runs most commands,
        so will have to try and fail each time (and thus. yum will be be much
        slower). If it is a very temporary problem though, this is often a nice
        compromise:

            yum-config-manager --save --setopt=epel.skip_if_unavailable=true

failure: repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2 from epel: [Errno 256] No more mirrors to try.
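Every mirror in the list 404s on the same `repodata/...-primary.sqlite.bz2` file, which usually means the builder's cached `repomd.xml` is stale and points at an index that has since been regenerated upstream. A hedged sketch of the usual recovery on the build node follows; the `epel` repo id is taken from the log, everything else is a standard yum invocation rather than anything specific to this job.

```shell
# Sketch only: recover from stale EPEL repo metadata (404 on repodata files).
# Drop the cached metadata so yum refetches a fresh repomd.xml and indexes.
yum clean metadata --disablerepo='*' --enablerepo=epel
yum makecache --disablerepo='*' --enablerepo=epel

# Alternatively, as the log's own advice (option 5) suggests, let the build
# tolerate a temporarily broken repo instead of failing outright:
yum-config-manager --save --setopt=epel.skip_if_unavailable=true
```

For a CI job, `skip_if_unavailable` trades a hard failure for a possible missing-package error later, so refreshing the metadata is generally the safer first step.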
http://mirror.ci.centos.org/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
http://mirror.es.its.nyu.edu/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
http://csc.mcs.sdsmt.edu/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 12] Timeout on http://csc.mcs.sdsmt.edu/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: (28, 'Connection timed out after 30001 milliseconds')
https://d2lzkl7pfhq30w.cloudfront.net/pub/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
https://mirror.umd.edu/fedora/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
https://iad.mirror.rackspace.com/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
http://mirror.us.leaseweb.net/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
http://mirror.cogentco.com/pub/linux/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
http://reflector.westga.edu/repos/Fedora-EPEL/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
https://mirror.vcu.edu/pub/gnu+linux/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
http://mirror.math.princeton.edu/pub/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
http://mirror.cs.pitt.edu/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
http://mirror.cs.princeton.edu/pub/mirrors/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
https://epel.mirror.constant.com/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
https://ewr.edge.kernel.org/fedora-buffet/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
http://mirror.seas.harvard.edu/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
http://mirrors.mit.edu/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
http://fedora-epel.mirrors.tds.net/fedora-epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
http://fedora.mirrors.pair.com/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
https://download-ib01.fedoraproject.org/pub/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
http://mirror.grid.uchicago.edu/pub/linux/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
https://mirror.steadfastnet.com/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
http://ftp.cse.buffalo.edu/pub/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
http://mirror.uic.edu/EPEL/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
https://ord.mirror.rackspace.com/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
http://mirror.metrocast.net/fedora/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
https://mirror.csclub.uwaterloo.ca/fedora/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
https://mirror.us-midwest-1.nexcess.net/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
http://mirrors.liquidweb.com/fedora-epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
https://pubmirror2.math.uh.edu/fedora-buffet/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
http://mirror.coastal.edu/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
http://mirror.rnet.missouri.edu/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
https://dfw.mirror.rackspace.com/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
https://mirror.mrjester.net/fedora/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
http://mirror.its.dal.ca/pub/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
http://mirror.oss.ou.edu/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
http://mirror.twinlakes.net/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
http://mirror.compevo.com/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
https://mirror.colorado.edu/fedora/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
https://mirror.dal.nexril.net/fedora-epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
https://mirror.chpc.utah.edu/pub/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
http://mirror.nodesdirect.com/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
http://mirrors.syringanetworks.net/fedora-epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
https://mirrors.cat.pdx.edu/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
http://mirror.prgmr.com/pub/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
https://sjc.edge.kernel.org/fedora-buffet/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
https://dl.fedoraproject.org/pub/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
https://mirrors.sonic.net/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
http://mirror.sjc02.svwh.net/fedora-epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
http://mirror.sfo12.us.leaseweb.net/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
http://mirrors.kernel.org/fedora-epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
https://mirrors.xmission.com/fedora-epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
http://fedora.westmancom.com/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
https://pubmirror1.math.uh.edu/fedora-buffet/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 12] Timeout on
https://pubmirror1.math.uh.edu/fedora-buffet/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: (28, 'Connection timed out after 30001 milliseconds')
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for :Building remotely : True
Logical operation result is TRUE
Running script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins4812766810139303300.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done 299634d3
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor |
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 169 | n42.crusty | 172.19.2.42 | crusty | 3851 | Deployed | 299634d3 | None | None | 7 | x86_64 | 1 | 2410 | None |
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0

From ci at centos.org  Sat Aug  3 00:41:01 2019
From: ci at centos.org (ci at centos.org)
Date: Sat, 3 Aug 2019 00:41:01 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #248
In-Reply-To: <1651489082.1927.1564706275295.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1651489082.1927.1564706275295.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: 
<2086647811.2060.1564792861206.JavaMail.jenkins@jenkins.ci.centos.org>

See 

------------------------------------------
[...truncated 289.02 KB...]

TASK [container-engine/docker : check number of search domains] ****************
Saturday 03 August 2019 01:40:18 +0100 (0:00:00.299) 0:03:01.899 *******

TASK [container-engine/docker : check length of search domains] ****************
Saturday 03 August 2019 01:40:18 +0100 (0:00:00.302) 0:03:02.201 *******

TASK [container-engine/docker : check for minimum kernel version] **************
Saturday 03 August 2019 01:40:18 +0100 (0:00:00.295) 0:03:02.496 *******

TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] ***
Saturday 03 August 2019 01:40:18 +0100 (0:00:00.289) 0:03:02.786 *******

TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] ***
Saturday 03 August 2019 01:40:19 +0100 (0:00:00.629) 0:03:03.416 *******

TASK [container-engine/docker : ensure docker-ce repository public key is installed] ***
Saturday 03 August 2019 01:40:20 +0100 (0:00:01.366) 0:03:04.782 *******

TASK [container-engine/docker : ensure docker-ce repository is enabled] ********
Saturday 03 August 2019 01:40:21 +0100 (0:00:00.272) 0:03:05.055 *******

TASK [container-engine/docker : ensure docker-engine repository public key is installed] ***
Saturday 03 August 2019 01:40:21 +0100 (0:00:00.261) 0:03:05.316 *******

TASK [container-engine/docker : ensure docker-engine repository is enabled] ****
Saturday 03 August 2019 01:40:21 +0100 (0:00:00.306) 0:03:05.623 *******

TASK [container-engine/docker : Configure docker repository on Fedora] *********
Saturday 03 August 2019 01:40:22 +0100 (0:00:00.313) 0:03:05.937 *******

TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] ***
Saturday 03 August 2019 01:40:22 +0100 (0:00:00.292) 0:03:06.229 *******

TASK [container-engine/docker : Copy yum.conf for editing] *********************
Saturday 03 August 2019 01:40:22 +0100 (0:00:00.294) 0:03:06.524 *******

TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ******
Saturday 03 August 2019 01:40:22 +0100 (0:00:00.289) 0:03:06.814 *******

TASK [container-engine/docker : ensure docker packages are installed] **********
Saturday 03 August 2019 01:40:23 +0100 (0:00:00.292) 0:03:07.106 *******

TASK [container-engine/docker : Ensure docker packages are installed] **********
Saturday 03 August 2019 01:40:23 +0100 (0:00:00.355) 0:03:07.461 *******

TASK [container-engine/docker : get available packages on Ubuntu] **************
Saturday 03 August 2019 01:40:23 +0100 (0:00:00.335) 0:03:07.797 *******

TASK [container-engine/docker : show available packages on ubuntu] *************
Saturday 03 August 2019 01:40:24 +0100 (0:00:00.276) 0:03:08.073 *******

TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] ***
Saturday 03 August 2019 01:40:24 +0100 (0:00:00.286) 0:03:08.359 *******

TASK [container-engine/docker : ensure service is started if docker packages are already present] ***
Saturday 03 August 2019 01:40:24 +0100 (0:00:00.291) 0:03:08.651 *******
ok: [kube3]
ok: [kube1]
ok: [kube2]
 [WARNING]: flush_handlers task does not support when conditional

TASK [container-engine/docker : set fact for docker_version] *******************
Saturday 03 August 2019 01:40:26 +0100 (0:00:01.999) 0:03:10.650 *******
ok: [kube1]
ok: [kube3]
ok: [kube2]

TASK [container-engine/docker : check minimum docker version for docker_dns mode. You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] ***
Saturday 03 August 2019 01:40:27 +0100 (0:00:01.121) 0:03:11.771 *******

TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] ***
Saturday 03 August 2019 01:40:28 +0100 (0:00:00.419) 0:03:12.191 *******
changed: [kube1]
changed: [kube2]
changed: [kube3]

TASK [container-engine/docker : Write docker proxy drop-in] ********************
Saturday 03 August 2019 01:40:29 +0100 (0:00:01.062) 0:03:13.253 *******

TASK [container-engine/docker : get systemd version] ***************************
Saturday 03 August 2019 01:40:29 +0100 (0:00:00.310) 0:03:13.564 *******

TASK [container-engine/docker : Write docker.service systemd file] *************
Saturday 03 August 2019 01:40:30 +0100 (0:00:00.310) 0:03:13.874 *******

TASK [container-engine/docker : Write docker options systemd drop-in] **********
Saturday 03 August 2019 01:40:30 +0100 (0:00:00.320) 0:03:14.195 *******
changed: [kube1]
changed: [kube3]
changed: [kube2]

TASK [container-engine/docker : Write docker dns systemd drop-in] **************
Saturday 03 August 2019 01:40:32 +0100 (0:00:02.065) 0:03:16.260 *******
changed: [kube1]
changed: [kube2]
changed: [kube3]

TASK [container-engine/docker : Copy docker orphan clean up script to the node] ***
Saturday 03 August 2019 01:40:34 +0100 (0:00:02.059) 0:03:18.320 *******

TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] ***
Saturday 03 August 2019 01:40:34 +0100 (0:00:00.307) 0:03:18.627 *******

RUNNING HANDLER [container-engine/docker : restart docker] *********************
Saturday 03 August 2019 01:40:35 +0100 (0:00:00.239) 0:03:18.867 *******
changed: [kube3]
changed: [kube1]
changed: [kube2]

RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************
Saturday 03 August 2019 01:40:36 +0100 (0:00:01.924) 0:03:20.792 *******
changed: [kube3]
changed: [kube1]
changed: [kube2]

RUNNING HANDLER [container-engine/docker : Docker | reload docker.socket] ******
Saturday 03 August 2019 01:40:38 +0100 (0:00:01.138) 0:03:21.930 *******

RUNNING HANDLER [container-engine/docker : Docker | reload docker] *************
Saturday 03 August 2019 01:40:38 +0100 (0:00:00.276) 0:03:22.206 *******
changed: [kube2]
changed: [kube1]
changed: [kube3]

RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] ***
Saturday 03 August 2019 01:40:42 +0100 (0:00:04.019) 0:03:26.226 *******
Pausing for 10 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
[container-engine/docker : Docker | pause while Docker restarts]
Waiting for docker restart:
ok: [kube3]

RUNNING HANDLER [container-engine/docker : Docker | wait for docker] ***********
Saturday 03 August 2019 01:40:52 +0100 (0:00:10.217) 0:03:36.443 *******
changed: [kube1]
changed: [kube3]
changed: [kube2]

TASK [container-engine/docker : ensure docker service is started and enabled] ***
Saturday 03 August 2019 01:40:53 +0100 (0:00:01.237) 0:03:37.680 *******
ok: [kube2] => (item=docker)
ok: [kube1] => (item=docker)
ok: [kube3] => (item=docker)

TASK [download : include_tasks] ************************************************
Saturday 03 August 2019 01:40:55 +0100 (0:00:01.295) 0:03:38.976 *******
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3

TASK [download : Register docker images info] **********************************
Saturday 03 August 2019 01:40:55 +0100 (0:00:00.524) 0:03:39.501 *******
ok: [kube1]
ok: [kube2]
ok: [kube3]

TASK [download : container_download | Create dest directory for saved/loaded container images] ***
Saturday 03 August 2019 01:40:56 +0100 (0:00:01.102) 0:03:40.603 *******
changed: [kube1]
changed: [kube2]
changed: [kube3]

TASK [download : container_download | create local directory for saved/loaded container images] ***
Saturday 03 August 2019 01:40:57 +0100 (0:00:01.147) 0:03:41.750 *******

TASK [download :
Download items] *********************************************** Saturday 03 August 2019 01:40:58 +0100 (0:00:00.144) 0:03:41.895 ******* fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} PLAY RECAP ********************************************************************* kube1 : ok=108 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0 kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0 kube3 : ok=95 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0 Saturday 03 August 2019 01:41:00 +0100 (0:00:02.771) 0:03:44.667 ******* =============================================================================== Install packages ------------------------------------------------------- 35.53s Wait for host to be available ------------------------------------------ 21.71s gather facts from all instances ---------------------------------------- 16.64s container-engine/docker : Docker | pause while Docker restarts --------- 10.22s Persist loaded modules -------------------------------------------------- 6.19s kubernetes/preinstall : Create kubernetes directories ------------------- 4.06s container-engine/docker : Docker | reload docker ------------------------ 4.02s download : Download items ----------------------------------------------- 2.77s bootstrap-os : Assign 
inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.72s Load required kernel modules -------------------------------------------- 2.62s Extend root VG ---------------------------------------------------------- 2.57s bootstrap-os : Gather nodes hostnames ----------------------------------- 2.56s kubernetes/preinstall : Create cni directories -------------------------- 2.55s kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.30s kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.17s download : Download items ----------------------------------------------- 2.15s Gathering Facts --------------------------------------------------------- 2.07s container-engine/docker : Write docker options systemd drop-in ---------- 2.07s container-engine/docker : Write docker dns systemd drop-in -------------- 2.06s download : Sync container ----------------------------------------------- 2.03s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Sat Aug 3 00:55:01 2019 From: ci at centos.org (ci at centos.org) Date: Sat, 3 Aug 2019 00:55:01 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10763 - Failure! 
(master on CentOS-7/x86_64)
Message-ID: <2112994816.2064.1564793701478.JavaMail.jenkins@jenkins.ci.centos.org>

gluster_build-rpms - Build # 10763 - Failure:
Check console output at https://ci.centos.org/job/gluster_build-rpms/10763/ to view the results.

From ci at centos.org Sat Aug 3 00:56:46 2019
From: ci at centos.org (ci at centos.org)
Date: Sat, 3 Aug 2019 00:56:46 +0000 (UTC)
Subject: [CI-results] gluster_build-rpms - Build # 10766 - Failure! (release-4.1 on CentOS-7/x86_64)
Message-ID: <1937430641.2067.1564793807161.JavaMail.jenkins@jenkins.ci.centos.org>

gluster_build-rpms - Build # 10766 - Failure:
Check console output at https://ci.centos.org/job/gluster_build-rpms/10766/ to view the results.

From ci at centos.org Sat Aug 3 01:02:31 2019
From: ci at centos.org (ci at centos.org)
Date: Sat, 3 Aug 2019 01:02:31 +0000 (UTC)
Subject: [CI-results] gluster_build-rpms - Build # 10768 - Failure! (release-5 on CentOS-7/x86_64)
Message-ID: <354436240.2072.1564794151888.JavaMail.jenkins@jenkins.ci.centos.org>

gluster_build-rpms - Build # 10768 - Failure:
Check console output at https://ci.centos.org/job/gluster_build-rpms/10768/ to view the results.

From ci at centos.org Sat Aug 3 01:02:47 2019
From: ci at centos.org (ci at centos.org)
Date: Sat, 3 Aug 2019 01:02:47 +0000 (UTC)
Subject: [CI-results] gluster_build-rpms - Build # 10769 - Still Failing! (release-6 on CentOS-7/x86_64)
In-Reply-To: <354436240.2072.1564794151888.JavaMail.jenkins@jenkins.ci.centos.org>
References: <354436240.2072.1564794151888.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <775790700.2074.1564794168391.JavaMail.jenkins@jenkins.ci.centos.org>

gluster_build-rpms - Build # 10769 - Still Failing:
Check console output at https://ci.centos.org/job/gluster_build-rpms/10769/ to view the results.
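The repeated "'delegate_to' is not a valid attribute for a TaskInclude" failures in the gluster_gd2-nightly log above are the usual symptom of attaching delegate_to directly to a dynamic include_tasks task, which newer Ansible releases reject. A minimal sketch of the pattern and one possible rearrangement follows; the include target, task body, and variable names are illustrative, not the actual kubespray code:

```yaml
# Failing pattern (illustrative): delegate_to is set on the include itself.
# Ansible validates include_tasks against the TaskInclude schema, which does
# not accept delegate_to, producing the fatal error seen in the log above.
- name: container_download | Make download decision if pull is required by tag or sha256
  include_tasks: set_container_facts.yml   # hypothetical file name
  delegate_to: "{{ download_delegate }}"   # rejected: not valid on a TaskInclude

# One possible fix (sketch): move the delegation onto the tasks inside the
# included file, so the include itself carries no delegate_to keyword.
# --- set_container_facts.yml ---
- name: container_download | gather image facts on the delegate host
  command: docker images -q
  delegate_to: "{{ download_delegate }}"
  run_once: true
```

Switching the include to import_tasks (static import) is another route some playbooks take, since static imports accept task keywords differently; which option fits depends on the Ansible version the CI nodes run.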
From ci at centos.org Sat Aug 3 01:09:26 2019
From: ci at centos.org (ci at centos.org)
Date: Sat, 3 Aug 2019 01:09:26 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #273
In-Reply-To: <1695768255.1936.1564708718934.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1695768255.1936.1564708718934.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1694754329.2075.1564794566200.JavaMail.jenkins@jenkins.ci.centos.org>

See 
------------------------------------------
[...truncated 60.45 KB...]
https://mirror.csclub.uwaterloo.ca/fedora/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
http://mirrors.liquidweb.com/fedora-epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
https://mirror.us-midwest-1.nexcess.net/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
https://pubmirror2.math.uh.edu/fedora-buffet/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
https://mirror.mrjester.net/fedora/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
http://mirror.coastal.edu/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
https://d2lzkl7pfhq30w.cloudfront.net/pub/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
http://mirror.rnet.missouri.edu/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
http://mirror.twinlakes.net/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
http://mirror.oss.ou.edu/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
http://mirror.its.dal.ca/pub/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
https://dfw.mirror.rackspace.com/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
https://mirror.dal.nexril.net/fedora-epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
http://mirror.nodesdirect.com/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
https://mirror.colorado.edu/fedora/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
http://mirrors.syringanetworks.net/fedora-epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
https://mirrors.cat.pdx.edu/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
http://mirror.prgmr.com/pub/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
http://mirrors.kernel.org/fedora-epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
https://dl.fedoraproject.org/pub/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
https://mirrors.sonic.net/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
https://sjc.edge.kernel.org/fedora-buffet/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
http://mirror.sjc02.svwh.net/fedora-epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
http://mirror.sfo12.us.leaseweb.net/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
https://mirrors.xmission.com/fedora-epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
https://mirror.umd.edu/fedora/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
http://fedora.westmancom.com/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
https://mirror.chpc.utah.edu/pub/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
http://mirror.compevo.com/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 12] Timeout on http://mirror.compevo.com/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: (28, 'Connection timed out after 30001 milliseconds')
Trying other mirror.
https://pubmirror1.math.uh.edu/fedora-buffet/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 12] Timeout on https://pubmirror1.math.uh.edu/fedora-buffet/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: (28, 'Connection timed out after 30001 milliseconds')
Trying other mirror.


 One of the configured repositories failed (Extra Packages for Enterprise Linux 7 - x86_64),
 and yum doesn't have enough cached data to continue. At this point the only
 safe thing yum can do is fail. There are a few ways to work "fix" this:

     1. Contact the upstream for the repository and get them to fix the problem.

     2. Reconfigure the baseurl/etc. for the repository, to point to a working
        upstream. This is most often useful if you are using a newer
        distribution release than is supported by the repository (and the
        packages for the previous distribution release still work).

     3. Run the command with the repository temporarily disabled
            yum --disablerepo=epel ...

     4. Disable the repository permanently, so yum won't use it by default. Yum
        will then just ignore the repository until you permanently enable it
        again or use --enablerepo for temporary usage:

            yum-config-manager --disable epel
        or
            subscription-manager repos --disable=epel

     5. Configure the failing repository to be skipped, if it is unavailable.
        Note that yum will try to contact the repo. when it runs most commands,
        so will have to try and fail each time (and thus. yum will be be much
        slower). If it is a very temporary problem though, this is often a nice
        compromise:

            yum-config-manager --save --setopt=epel.skip_if_unavailable=true

failure: repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2 from epel: [Errno 256] No more mirrors to try.
http://mirror.ci.centos.org/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
http://mirror.es.its.nyu.edu/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
http://csc.mcs.sdsmt.edu/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 12] Timeout on http://csc.mcs.sdsmt.edu/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: (28, 'Connection timed out after 30001 milliseconds')
http://mirror.us.leaseweb.net/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
https://iad.mirror.rackspace.com/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
http://mirror.cogentco.com/pub/linux/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
http://reflector.westga.edu/repos/Fedora-EPEL/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
https://mirror.vcu.edu/pub/gnu+linux/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
http://mirror.cs.princeton.edu/pub/mirrors/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
http://mirror.math.princeton.edu/pub/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
http://mirror.cs.pitt.edu/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
https://epel.mirror.constant.com/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
https://ewr.edge.kernel.org/fedora-buffet/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
http://fedora-epel.mirrors.tds.net/fedora-epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
http://mirrors.mit.edu/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
http://fedora.mirrors.pair.com/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
https://download-ib01.fedoraproject.org/pub/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
http://ftp.cse.buffalo.edu/pub/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
https://mirror.steadfastnet.com/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
http://mirror.grid.uchicago.edu/pub/linux/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
http://mirror.uic.edu/EPEL/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
http://mirror.seas.harvard.edu/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
http://mirror.metrocast.net/fedora/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
https://ord.mirror.rackspace.com/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
[... the remaining entries repeat the mirror 404s already listed above; log truncated ...]
http://mirrors.syringanetworks.net/fedora-epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found https://mirrors.cat.pdx.edu/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found http://mirror.prgmr.com/pub/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found http://mirrors.kernel.org/fedora-epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found https://dl.fedoraproject.org/pub/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found https://mirrors.sonic.net/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found https://sjc.edge.kernel.org/fedora-buffet/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found http://mirror.sjc02.svwh.net/fedora-epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found http://mirror.sfo12.us.leaseweb.net/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found https://mirrors.xmission.com/fedora-epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found https://mirror.umd.edu/fedora/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found 
http://fedora.westmancom.com/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found https://mirror.chpc.utah.edu/pub/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found http://mirror.compevo.com/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 12] Timeout on http://mirror.compevo.com/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: (28, 'Connection timed out after 30001 milliseconds') https://pubmirror1.math.uh.edu/fedora-buffet/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: [Errno 12] Timeout on https://pubmirror1.math.uh.edu/fedora-buffet/epel/7/x86_64/repodata/2a2ea7d5fed1126e05a4c0186bba5e6f52393f14630998c4a21973e3c32f5e4c-primary.sqlite.bz2: (28, 'Connection timed out after 30001 milliseconds') ./gluster-ansible-infra/tests/run-centos-ci.sh: line 8: virtualenv: command not found ./gluster-ansible-infra/tests/run-centos-ci.sh: line 9: env/bin/activate: No such file or directory ./gluster-ansible-infra/tests/run-centos-ci.sh: line 12: pip: command not found Loaded plugins: fastestmirror adding repo from: https://download.docker.com/linux/centos/docker-ce.repo grabbing file https://download.docker.com/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo repo saved to /etc/yum.repos.d/docker-ce.repo Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * base: mirror3.ci.centos.org * epel: mirror.ci.centos.org * extras: mirror3.ci.centos.org * updates: mirror3.ci.centos.org http://mirror.ci.centos.org/epel/7/x86_64/repodata/0ebe68eeba23415ae539e4260af6f98e93b12ee4d7db2ee7005545df7cf2783f-updateinfo.xml.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. 
To address this issue please refer to the below wiki article https://wiki.centos.org/yum-errors If above article doesn't help to resolve this issue please use https://bugs.centos.org/. Resolving Dependencies --> Running transaction check ---> Package docker-ce.x86_64 3:19.03.1-3.el7 will be installed --> Processing Dependency: container-selinux >= 2:2.74 for package: 3:docker-ce-19.03.1-3.el7.x86_64 --> Processing Dependency: containerd.io >= 1.2.2-3 for package: 3:docker-ce-19.03.1-3.el7.x86_64 --> Processing Dependency: docker-ce-cli for package: 3:docker-ce-19.03.1-3.el7.x86_64 --> Running transaction check ---> Package container-selinux.noarch 2:2.99-1.el7_6 will be installed ---> Package containerd.io.x86_64 0:1.2.6-3.3.el7 will be installed ---> Package docker-ce-cli.x86_64 1:19.03.1-3.el7 will be installed --> Finished Dependency Resolution Dependencies Resolved ================================================================================ Package Arch Version Repository Size ================================================================================ Installing: docker-ce x86_64 3:19.03.1-3.el7 docker-ce-stable 24 M Installing for dependencies: container-selinux noarch 2:2.99-1.el7_6 extras 39 k containerd.io x86_64 1.2.6-3.3.el7 docker-ce-stable 26 M docker-ce-cli x86_64 1:19.03.1-3.el7 docker-ce-stable 39 M Transaction Summary ================================================================================ Install 1 Package (+3 Dependent packages) Total download size: 90 M Installed size: 368 M Downloading packages: warning: /var/cache/yum/x86_64/7/docker-ce-stable/packages/containerd.io-1.2.6-3.3.el7.x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID 621e9f35: NOKEY Public key for containerd.io-1.2.6-3.3.el7.x86_64.rpm is not installed -------------------------------------------------------------------------------- Total 56 MB/s | 90 MB 00:01 Retrieving key from https://download.docker.com/linux/centos/gpg Importing GPG key 0x621E9F35: Userid : 
"Docker Release (CE rpm) " Fingerprint: 060a 61c5 1b55 8a7f 742b 77aa c52f eb6b 621e 9f35 From : https://download.docker.com/linux/centos/gpg Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : 2:container-selinux-2.99-1.el7_6.noarch 1/4 Installing : containerd.io-1.2.6-3.3.el7.x86_64 2/4 Installing : 1:docker-ce-cli-19.03.1-3.el7.x86_64 3/4 Installing : 3:docker-ce-19.03.1-3.el7.x86_64 4/4 Verifying : 1:docker-ce-cli-19.03.1-3.el7.x86_64 1/4 Verifying : 3:docker-ce-19.03.1-3.el7.x86_64 2/4 Verifying : containerd.io-1.2.6-3.3.el7.x86_64 3/4 Verifying : 2:container-selinux-2.99-1.el7_6.noarch 4/4 Installed: docker-ce.x86_64 3:19.03.1-3.el7 Dependency Installed: container-selinux.noarch 2:2.99-1.el7_6 containerd.io.x86_64 0:1.2.6-3.3.el7 docker-ce-cli.x86_64 1:19.03.1-3.el7 Complete! Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service. ./gluster-ansible-infra/tests/run-centos-ci.sh: line 26: molecule: command not found ./gluster-ansible-infra/tests/run-centos-ci.sh: line 27: molecule: command not found ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory ./gluster-ansible-infra/tests/run-centos-ci.sh: line 30: molecule: command not found ./gluster-ansible-infra/tests/run-centos-ci.sh: line 31: molecule: command not found Build step 'Execute shell' marked build as failure Performing Post build task... 
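The failures above ("virtualenv: command not found", "pip: command not found", "molecule: command not found") come from run-centos-ci.sh assuming those tools are already on PATH. A minimal sketch of a preflight guard such a script could add; the `require` helper and its messages are illustrative, not part of the actual run-centos-ci.sh:

```shell
#!/bin/sh
# Fail fast with a clear message when a required tool is missing,
# instead of hitting "command not found" partway through the job.
require() {
    for tool in "$@"; do
        if ! command -v "$tool" >/dev/null 2>&1; then
            echo "missing required tool: $tool" >&2
            return 1
        fi
    done
}

# In the real script this would be: require virtualenv pip molecule
# Here we check tools that are always present, just to demonstrate.
require sh cat || exit 1
echo "all required tools present"
```

With such a guard, the job would fail on its first line with one actionable message rather than accumulating a separate error per missing command.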
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Sun Aug 4 00:16:55 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 4 Aug 2019 00:16:55 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #445 In-Reply-To: <1431366258.2054.1564791299441.JavaMail.jenkins@jenkins.ci.centos.org> References: <1431366258.2054.1564791299441.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <503175034.2199.1564877815883.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 38.99 KB...] Transaction test succeeded Running transaction Installing : python36-libs-3.6.8-1.el7.x86_64 1/52 Installing : python36-3.6.8-1.el7.x86_64 2/52 Installing : apr-1.4.8-3.el7_4.1.x86_64 3/52 Installing : mpfr-3.1.1-4.el7.x86_64 4/52 Installing : libmpc-1.0.1-3.el7.x86_64 5/52 Installing : apr-util-1.5.2-6.el7.x86_64 6/52 Installing : python36-six-1.11.0-3.el7.noarch 7/52 Installing : cpp-4.8.5-36.el7_6.2.x86_64 8/52 Installing : python36-idna-2.7-2.el7.noarch 9/52 Installing : python36-pysocks-1.6.8-6.el7.noarch 10/52 Installing : python36-urllib3-1.19.1-5.el7.noarch 11/52 Installing : python36-pyroute2-0.4.13-2.el7.noarch 12/52 Installing : python36-setuptools-39.2.0-3.el7.noarch 13/52 Installing : python36-chardet-2.3.0-6.el7.noarch 14/52 Installing : python36-requests-2.12.5-3.el7.noarch 15/52 Installing : python36-distro-1.2.0-3.el7.noarch 16/52 Installing : python36-markupsafe-0.23-3.el7.x86_64 17/52 Installing : python36-jinja2-2.8.1-2.el7.noarch 18/52 Installing : python36-rpm-4.11.3-4.el7.x86_64 19/52 Installing : elfutils-0.172-2.el7.x86_64 20/52 Installing : unzip-6.0-19.el7.x86_64 21/52 Installing : dwz-0.11-3.el7.x86_64 
22/52 Installing : bzip2-1.0.6-13.el7.x86_64 23/52 Installing : distribution-gpg-keys-1.32-1.el7.noarch 24/52 Installing : mock-core-configs-30.4-1.el7.noarch 25/52 Installing : usermode-1.111-5.el7.x86_64 26/52 Installing : pakchois-0.4-10.el7.x86_64 27/52 Installing : patch-2.7.1-10.el7_5.x86_64 28/52 Installing : libmodman-2.0.1-8.el7.x86_64 29/52 Installing : libproxy-0.4.11-11.el7.x86_64 30/52 Installing : gdb-7.6.1-114.el7.x86_64 31/52 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/52 Installing : perl-srpm-macros-1-8.el7.noarch 33/52 Installing : pigz-2.3.4-1.el7.x86_64 34/52 Installing : golang-src-1.11.5-1.el7.noarch 35/52 Installing : nettle-2.7.1-8.el7.x86_64 36/52 Installing : kernel-headers-3.10.0-957.27.2.el7.x86_64 37/52 Installing : glibc-headers-2.17-260.el7_6.6.x86_64 38/52 Installing : glibc-devel-2.17-260.el7_6.6.x86_64 39/52 Installing : gcc-4.8.5-36.el7_6.2.x86_64 40/52 Installing : zip-3.0-11.el7.x86_64 41/52 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 42/52 Installing : mercurial-2.6.2-8.el7_4.x86_64 43/52 Installing : trousers-0.3.14-2.el7.x86_64 44/52 Installing : gnutls-3.3.29-9.el7_6.x86_64 45/52 Installing : neon-0.30.0-3.el7.x86_64 46/52 Installing : subversion-libs-1.7.14-14.el7.x86_64 47/52 Installing : subversion-1.7.14-14.el7.x86_64 48/52 Installing : golang-1.11.5-1.el7.x86_64 49/52 Installing : golang-bin-1.11.5-1.el7.x86_64 50/52 Installing : rpm-build-4.11.3-35.el7.x86_64 51/52 Installing : mock-1.4.16-1.el7.noarch 52/52 Verifying : trousers-0.3.14-2.el7.x86_64 1/52 Verifying : python36-idna-2.7-2.el7.noarch 2/52 Verifying : rpm-build-4.11.3-35.el7.x86_64 3/52 Verifying : glibc-headers-2.17-260.el7_6.6.x86_64 4/52 Verifying : python36-pysocks-1.6.8-6.el7.noarch 5/52 Verifying : mercurial-2.6.2-8.el7_4.x86_64 6/52 Verifying : zip-3.0-11.el7.x86_64 7/52 Verifying : python36-3.6.8-1.el7.x86_64 8/52 Verifying : subversion-libs-1.7.14-14.el7.x86_64 9/52 Verifying : python36-urllib3-1.19.1-5.el7.noarch 10/52 
Verifying : kernel-headers-3.10.0-957.27.2.el7.x86_64 11/52 Verifying : nettle-2.7.1-8.el7.x86_64 12/52 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 13/52 Verifying : golang-src-1.11.5-1.el7.noarch 14/52 Verifying : python36-pyroute2-0.4.13-2.el7.noarch 15/52 Verifying : pigz-2.3.4-1.el7.x86_64 16/52 Verifying : perl-srpm-macros-1-8.el7.noarch 17/52 Verifying : golang-1.11.5-1.el7.x86_64 18/52 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 19/52 Verifying : golang-bin-1.11.5-1.el7.x86_64 20/52 Verifying : gdb-7.6.1-114.el7.x86_64 21/52 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 22/52 Verifying : gnutls-3.3.29-9.el7_6.x86_64 23/52 Verifying : mock-1.4.16-1.el7.noarch 24/52 Verifying : libmodman-2.0.1-8.el7.x86_64 25/52 Verifying : python36-setuptools-39.2.0-3.el7.noarch 26/52 Verifying : mpfr-3.1.1-4.el7.x86_64 27/52 Verifying : python36-six-1.11.0-3.el7.noarch 28/52 Verifying : apr-util-1.5.2-6.el7.x86_64 29/52 Verifying : python36-chardet-2.3.0-6.el7.noarch 30/52 Verifying : patch-2.7.1-10.el7_5.x86_64 31/52 Verifying : libmpc-1.0.1-3.el7.x86_64 32/52 Verifying : pakchois-0.4-10.el7.x86_64 33/52 Verifying : neon-0.30.0-3.el7.x86_64 34/52 Verifying : usermode-1.111-5.el7.x86_64 35/52 Verifying : apr-1.4.8-3.el7_4.1.x86_64 36/52 Verifying : libproxy-0.4.11-11.el7.x86_64 37/52 Verifying : mock-core-configs-30.4-1.el7.noarch 38/52 Verifying : distribution-gpg-keys-1.32-1.el7.noarch 39/52 Verifying : glibc-devel-2.17-260.el7_6.6.x86_64 40/52 Verifying : bzip2-1.0.6-13.el7.x86_64 41/52 Verifying : subversion-1.7.14-14.el7.x86_64 42/52 Verifying : python36-distro-1.2.0-3.el7.noarch 43/52 Verifying : dwz-0.11-3.el7.x86_64 44/52 Verifying : unzip-6.0-19.el7.x86_64 45/52 Verifying : python36-markupsafe-0.23-3.el7.x86_64 46/52 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 47/52 Verifying : python36-requests-2.12.5-3.el7.noarch 48/52 Verifying : python36-jinja2-2.8.1-2.el7.noarch 49/52 Verifying : python36-libs-3.6.8-1.el7.x86_64 50/52 Verifying : 
elfutils-0.172-2.el7.x86_64 51/52 Verifying : python36-rpm-4.11.3-4.el7.x86_64 52/52 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.16-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.32-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.6 glibc-headers.x86_64 0:2.17-260.el7_6.6 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.27.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.4-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python36.x86_64 0:3.6.8-1.el7 python36-chardet.noarch 0:2.3.0-6.el7 python36-distro.noarch 0:1.2.0-3.el7 python36-idna.noarch 0:2.7-2.el7 python36-jinja2.noarch 0:2.8.1-2.el7 python36-libs.x86_64 0:3.6.8-1.el7 python36-markupsafe.x86_64 0:0.23-3.el7 python36-pyroute2.noarch 0:0.4.13-2.el7 python36-pysocks.noarch 0:1.6.8-6.el7 python36-requests.noarch 0:2.12.5-3.el7 python36-rpm.x86_64 0:4.11.3-4.el7 python36-setuptools.noarch 0:39.2.0-3.el7 python36-six.noarch 0:1.11.0-3.el7 python36-urllib3.noarch 0:1.19.1-5.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1654 0 --:--:-- --:--:-- --:--:-- 1662 0 8513k 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 8513k 100 8513k 0 0 14.1M 0 --:--:-- --:--:-- --:--:-- 62.9M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2200 0 --:--:-- --:--:-- --:--:-- 2207 61 38.3M 61 23.6M 0 0 27.9M 0 0:00:01 --:--:-- 0:00:01 27.9M100 38.3M 100 38.3M 0 0 28.2M 0 0:00:01 0:00:01 --:--:-- 28.7M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 562 0 --:--:-- --:--:-- --:--:-- 562 0 0 0 620 0 0 1593 0 --:--:-- --:--:-- --:--:-- 1593 88 10.7M 88 9706k 0 0 9294k 0 0:00:01 0:00:01 --:--:-- 9294k100 10.7M 100 10.7M 0 0 9591k 0 0:00:01 0:00:01 --:--:-- 12.4M ~/nightlyrpmMyM9UW/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmMyM9UW/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmMyM9UW/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmMyM9UW ~ INFO: mock.py version 1.4.16 starting (python version = 3.6.8)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmMyM9UW/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.16 INFO: Mock Version: 1.4.16 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmMyM9UW/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 26 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M d3416aca238743429cd9fdb383bb1f38 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.99r6pcnc:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
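The mock error above aborts inside the epel-7-x86_64 chroot while running rpmbuild on the nightly SRPM. A hedged sketch of how one might reproduce such a failure locally with the same mock configuration; the DRY_RUN guard is my addition for illustration and is not part of the CI job:

```shell
#!/bin/sh
# Rebuild the failed SRPM in the same mock chroot the CI job used.
# DRY_RUN=1 (the default here) only prints the command; set DRY_RUN=0
# on a machine with mock installed to actually run the rebuild.
SRPM=${SRPM:-glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm}  # name taken from the log above
CHROOT=epel-7-x86_64

cmd="mock -r $CHROOT --rebuild $SRPM"
if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "$cmd"
else
    $cmd
fi
```

Running the rebuild locally leaves build.log and root.log under mock's result directory, which is usually the fastest way to see the actual rpmbuild error that the truncated CI log hides.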
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins1029799312030768405.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done 5b787aa9 +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 177 | n50.crusty | 172.19.2.50 | crusty | 3859 | Deployed | 5b787aa9 | None | None | 7 | x86_64 | 1 | 2490 | None | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Sun Aug 4 00:40:54 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 4 Aug 2019 00:40:54 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #249 In-Reply-To: <2086647811.2060.1564792861206.JavaMail.jenkins@jenkins.ci.centos.org> References: <2086647811.2060.1564792861206.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1734496983.2201.1564879254644.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 288.91 KB...] 
TASK [container-engine/docker : check number of search domains] **************** Sunday 04 August 2019 01:40:11 +0100 (0:00:00.353) 0:03:01.669 ********* TASK [container-engine/docker : check length of search domains] **************** Sunday 04 August 2019 01:40:11 +0100 (0:00:00.317) 0:03:01.986 ********* TASK [container-engine/docker : check for minimum kernel version] ************** Sunday 04 August 2019 01:40:11 +0100 (0:00:00.317) 0:03:02.304 ********* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Sunday 04 August 2019 01:40:11 +0100 (0:00:00.293) 0:03:02.597 ********* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] *** Sunday 04 August 2019 01:40:12 +0100 (0:00:00.589) 0:03:03.186 ********* TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Sunday 04 August 2019 01:40:13 +0100 (0:00:01.300) 0:03:04.487 ********* TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Sunday 04 August 2019 01:40:14 +0100 (0:00:00.271) 0:03:04.759 ********* TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Sunday 04 August 2019 01:40:14 +0100 (0:00:00.270) 0:03:05.029 ********* TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Sunday 04 August 2019 01:40:14 +0100 (0:00:00.305) 0:03:05.335 ********* TASK [container-engine/docker : Configure docker repository on Fedora] ********* Sunday 04 August 2019 01:40:15 +0100 (0:00:00.375) 0:03:05.711 ********* TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Sunday 04 August 2019 01:40:15 +0100 (0:00:00.292) 0:03:06.003 ********* TASK [container-engine/docker : Copy yum.conf for editing] ********************* Sunday 04 August 2019 01:40:15 +0100 (0:00:00.286) 0:03:06.290 ********* TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Sunday 04 August 2019 
01:40:15 +0100 (0:00:00.294) 0:03:06.585 ********* TASK [container-engine/docker : ensure docker packages are installed] ********** Sunday 04 August 2019 01:40:16 +0100 (0:00:00.287) 0:03:06.873 ********* TASK [container-engine/docker : Ensure docker packages are installed] ********** Sunday 04 August 2019 01:40:16 +0100 (0:00:00.370) 0:03:07.243 ********* TASK [container-engine/docker : get available packages on Ubuntu] ************** Sunday 04 August 2019 01:40:16 +0100 (0:00:00.334) 0:03:07.578 ********* TASK [container-engine/docker : show available packages on ubuntu] ************* Sunday 04 August 2019 01:40:17 +0100 (0:00:00.277) 0:03:07.855 ********* TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Sunday 04 August 2019 01:40:17 +0100 (0:00:00.275) 0:03:08.131 ********* TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Sunday 04 August 2019 01:40:17 +0100 (0:00:00.278) 0:03:08.409 ********* ok: [kube1] ok: [kube2] ok: [kube3] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Sunday 04 August 2019 01:40:19 +0100 (0:00:01.989) 0:03:10.399 ********* ok: [kube1] ok: [kube3] ok: [kube2] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Sunday 04 August 2019 01:40:20 +0100 (0:00:01.025) 0:03:11.425 ********* TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Sunday 04 August 2019 01:40:21 +0100 (0:00:00.290) 0:03:11.715 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Sunday 04 August 2019 01:40:22 +0100 (0:00:01.068) 0:03:12.783 ********* TASK [container-engine/docker : get systemd version] *************************** Sunday 04 August 2019 01:40:22 +0100 (0:00:00.340) 0:03:13.124 ********* TASK [container-engine/docker : Write docker.service systemd file] ************* Sunday 04 August 2019 01:40:22 +0100 (0:00:00.316) 0:03:13.440 ********* TASK [container-engine/docker : Write docker options systemd drop-in] ********** Sunday 04 August 2019 01:40:23 +0100 (0:00:00.318) 0:03:13.759 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Sunday 04 August 2019 01:40:25 +0100 (0:00:02.472) 0:03:16.231 ********* changed: [kube3] changed: [kube1] changed: [kube2] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Sunday 04 August 2019 01:40:27 +0100 (0:00:02.257) 0:03:18.488 ********* TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Sunday 04 August 2019 01:40:28 +0100 (0:00:00.352) 0:03:18.841 ********* RUNNING HANDLER [container-engine/docker : restart docker] ********************* Sunday 04 August 2019 01:40:28 +0100 (0:00:00.241) 0:03:19.082 ********* changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Sunday 04 August 2019 01:40:30 +0100 (0:00:01.946) 0:03:21.029 ********* changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER 
[container-engine/docker : Docker | reload docker.socket] ******
Sunday 04 August 2019 01:40:31 +0100 (0:00:01.120) 0:03:22.150 *********
RUNNING HANDLER [container-engine/docker : Docker | reload docker] *************
Sunday 04 August 2019 01:40:31 +0100 (0:00:00.287) 0:03:22.438 *********
changed: [kube1]
changed: [kube3]
changed: [kube2]
RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] ***
Sunday 04 August 2019 01:40:35 +0100 (0:00:04.075) 0:03:26.514 *********
Pausing for 10 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
[container-engine/docker : Docker | pause while Docker restarts]
Waiting for docker restart:
ok: [kube1]
RUNNING HANDLER [container-engine/docker : Docker | wait for docker] ***********
Sunday 04 August 2019 01:40:46 +0100 (0:00:10.231) 0:03:36.745 *********
changed: [kube1]
changed: [kube2]
changed: [kube3]
TASK [container-engine/docker : ensure docker service is started and enabled] ***
Sunday 04 August 2019 01:40:47 +0100 (0:00:01.138) 0:03:37.884 *********
ok: [kube1] => (item=docker)
ok: [kube2] => (item=docker)
ok: [kube3] => (item=docker)
TASK [download : include_tasks] ************************************************
Sunday 04 August 2019 01:40:48 +0100 (0:00:01.333) 0:03:39.217 *********
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3
TASK [download : Register docker images info] **********************************
Sunday 04 August 2019 01:40:49 +0100 (0:00:00.524) 0:03:39.742 *********
ok: [kube1]
ok: [kube2]
ok: [kube3]
TASK [download : container_download | Create dest directory for saved/loaded container images] ***
Sunday 04 August 2019 01:40:50 +0100 (0:00:01.058) 0:03:40.800 *********
changed: [kube1]
changed: [kube2]
changed: [kube3]
TASK [download : container_download | create local directory for saved/loaded container images] ***
Sunday 04 August 2019 01:40:51 +0100 (0:00:01.110) 0:03:41.910 *********
TASK [download : Download items] ***********************************************
Sunday 04 August 2019 01:40:51 +0100 (0:00:00.173) 0:03:42.083 *********
fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
PLAY RECAP *********************************************************************
kube1 : ok=109 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0
kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0
kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0
Sunday 04 August 2019 01:40:54 +0100 (0:00:02.787) 0:03:44.871 *********
===============================================================================
Install packages ------------------------------------------------------- 36.49s
Wait for host to be available ------------------------------------------ 21.70s
gather facts from all instances ---------------------------------------- 16.51s
container-engine/docker : Docker | pause while Docker restarts --------- 10.23s
Persist loaded modules -------------------------------------------------- 5.97s
container-engine/docker : Docker | reload docker ------------------------ 4.08s
kubernetes/preinstall : Create kubernetes directories ------------------- 4.07s
download : Download items ----------------------------------------------- 2.79s
bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.66s
Load required kernel modules -------------------------------------------- 2.64s
kubernetes/preinstall : Create cni directories -------------------------- 2.56s
bootstrap-os : Gather nodes hostnames ----------------------------------- 2.51s
container-engine/docker : Write docker options systemd drop-in ---------- 2.47s
Extend root VG ---------------------------------------------------------- 2.39s
kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.31s
container-engine/docker : Write docker dns systemd drop-in -------------- 2.26s
kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.22s
Gathering Facts --------------------------------------------------------- 2.12s
download : Sync container ----------------------------------------------- 2.05s
download : Download items ----------------------------------------------- 2.03s
==> kube3: An error occurred. The error will be shown after all tasks complete.
An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below.
An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again:
Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again.
Build step 'Execute shell' marked build as failure
Performing Post build task...
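Every `download : Download items` failure above is the same Ansible parse error: keywords such as `delegate_to` cannot be attached directly to a dynamic `include_tasks` (a `TaskInclude` object), only to the tasks it includes. A minimal sketch of the invalid pattern and one possible fix using `apply` (available since Ansible 2.7); the `download_delegate` variable here is illustrative, not taken from the kubespray source:

```yaml
# Fails on newer Ansible with:
#   "'delegate_to' is not a valid attribute for a TaskInclude"
- name: container_download | Make download decision if pull is required by tag or sha256
  include_tasks: download_container.yml
  delegate_to: "{{ download_delegate }}"   # keyword sits on the include itself

# One possible fix: forward the keyword to the included tasks via 'apply'.
- name: container_download | Make download decision if pull is required by tag or sha256
  include_tasks:
    file: download_container.yml
    apply:
      delegate_to: "{{ download_delegate }}"
```

Pinning the job to an Ansible release that kubespray supports is the other common way out; which side to change depends on what is easier to control in this CI job.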
Could not match :Build started : False
Logical operation result is FALSE
Skipping script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by

SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

END OF POST BUILD TASK : 0

From ci at centos.org Sun Aug 4 01:24:04 2019
From: ci at centos.org (ci at centos.org)
Date: Sun, 4 Aug 2019 01:24:04 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #274
In-Reply-To: <1694754329.2075.1564794566200.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1694754329.2075.1564794566200.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1658276098.2206.1564881844510.JavaMail.jenkins@jenkins.ci.centos.org>

See 
------------------------------------------
[...truncated 57.20 KB...]
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Wait for instance(s) creation to complete] *******************************
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
--> Scenario: 'default'
--> Action: 'prepare'
PLAY [Prepare] *****************************************************************
TASK [Gathering Facts] *********************************************************
ok: [instance]
TASK [Install Dependency Packages] *********************************************
changed: [instance]
TASK [Install Dependency Packages] *********************************************
changed: [instance]
PLAY RECAP *********************************************************************
instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: author
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[703] Should change default metadata: description
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[703] Should change default metadata: company
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[703] Should change default metadata: license
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
An error occurred during the test sequence action: 'lint'. Cleaning up.
--> Scenario: 'default'
--> Action: 'destroy'
PLAY [Destroy] *****************************************************************
TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Delete docker network(s)] ************************************************
PLAY RECAP *********************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── create
    └── prepare

--> Scenario: 'default'
--> Action: 'create'
[DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
PLAY [Create] ******************************************************************
TASK [Log into a Docker registry] **********************************************
skipping: [localhost] => (item=None)
TASK [Create Dockerfiles from image names] *************************************
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Discover local Docker images] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]
TASK [Build an Ansible compatible image] ***************************************
ok: [localhost] => (item=None)
ok: [localhost]
TASK [Create docker network(s)] ************************************************
TASK [Determine the CMD directives] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]
TASK [Create molecule instance(s)] *********************************************
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Wait for instance(s) creation to complete] *******************************
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
--> Scenario: 'default'
--> Action: 'prepare'
PLAY [Prepare] *****************************************************************
TASK [Gathering Facts] *********************************************************
ok: [instance]
TASK [Install Dependency Packages] *********************************************
changed: [instance]
TASK [Install Dependency Packages] *********************************************
changed: [instance]
PLAY RECAP *********************************************************************
instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: author
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[703] Should change default metadata: description
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[703] Should change default metadata: company
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[703] Should change default metadata: license
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
An error occurred during the test sequence action: 'lint'. Cleaning up.
--> Scenario: 'default'
--> Action: 'destroy'
PLAY [Destroy] *****************************************************************
TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Delete docker network(s)] ************************************************
PLAY RECAP *********************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
Build step 'Execute shell' marked build as failure
Performing Post build task...
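The [701] and [703] findings above are ansible-lint rejecting the untouched `ansible-galaxy init` boilerplate still sitting in the role's `meta/main.yml`. A sketch of a `galaxy_info` block that would satisfy those rules; every value below is an illustrative placeholder, not the project's actual metadata:

```yaml
galaxy_info:
  author: A. Maintainer                                    # [703]: replace 'your name'
  description: Configures the firewall for GlusterFS nodes # [703]: replace 'your description'
  company: Example Org                                     # [703]: replace 'your company (optional)'
  license: GPLv3                                           # [703]: replace 'license (GPLv2, CC-BY, etc)'
  min_ansible_version: 2.4
  platforms:                                               # [701]: at least one platform entry
    - name: EL
      versions:
        - 7
  galaxy_tags: []
dependencies: []
```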
Could not match :Build started : False
Logical operation result is FALSE
Skipping script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by

SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

END OF POST BUILD TASK : 0

From ci at centos.org Mon Aug 5 00:16:51 2019
From: ci at centos.org (ci at centos.org)
Date: Mon, 5 Aug 2019 00:16:51 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #446
In-Reply-To: <503175034.2199.1564877815883.JavaMail.jenkins@jenkins.ci.centos.org>
References: <503175034.2199.1564877815883.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <15997938.2298.1564964211918.JavaMail.jenkins@jenkins.ci.centos.org>

See 
------------------------------------------
[...truncated 39.00 KB...]
Transaction test succeeded
Running transaction
Installing : python36-libs-3.6.8-1.el7.x86_64 1/52 Installing : python36-3.6.8-1.el7.x86_64 2/52 Installing : apr-1.4.8-3.el7_4.1.x86_64 3/52 Installing : mpfr-3.1.1-4.el7.x86_64 4/52 Installing : libmpc-1.0.1-3.el7.x86_64 5/52 Installing : apr-util-1.5.2-6.el7.x86_64 6/52 Installing : python36-six-1.11.0-3.el7.noarch 7/52 Installing : cpp-4.8.5-36.el7_6.2.x86_64 8/52 Installing : python36-idna-2.7-2.el7.noarch 9/52 Installing : python36-pysocks-1.6.8-6.el7.noarch 10/52 Installing : python36-urllib3-1.19.1-5.el7.noarch 11/52 Installing : python36-pyroute2-0.4.13-2.el7.noarch 12/52 Installing : python36-setuptools-39.2.0-3.el7.noarch 13/52 Installing : python36-chardet-2.3.0-6.el7.noarch 14/52 Installing : python36-requests-2.12.5-3.el7.noarch 15/52 Installing : python36-distro-1.2.0-3.el7.noarch 16/52 Installing : python36-markupsafe-0.23-3.el7.x86_64 17/52 Installing : python36-jinja2-2.8.1-2.el7.noarch 18/52 Installing : python36-rpm-4.11.3-4.el7.x86_64 19/52 Installing : elfutils-0.172-2.el7.x86_64 20/52 Installing : unzip-6.0-19.el7.x86_64 21/52 Installing : dwz-0.11-3.el7.x86_64 
22/52
  Installing : bzip2-1.0.6-13.el7.x86_64                                23/52
  Installing : distribution-gpg-keys-1.32-1.el7.noarch                  24/52
  Installing : mock-core-configs-30.4-1.el7.noarch                      25/52
  Installing : usermode-1.111-5.el7.x86_64                              26/52
  Installing : pakchois-0.4-10.el7.x86_64                               27/52
  Installing : patch-2.7.1-10.el7_5.x86_64                              28/52
  Installing : libmodman-2.0.1-8.el7.x86_64                             29/52
  Installing : libproxy-0.4.11-11.el7.x86_64                            30/52
  Installing : gdb-7.6.1-114.el7.x86_64                                 31/52
  Installing : perl-Thread-Queue-3.02-2.el7.noarch                      32/52
  Installing : perl-srpm-macros-1-8.el7.noarch                          33/52
  Installing : pigz-2.3.4-1.el7.x86_64                                  34/52
  Installing : golang-src-1.11.5-1.el7.noarch                           35/52
  Installing : nettle-2.7.1-8.el7.x86_64                                36/52
  Installing : kernel-headers-3.10.0-957.27.2.el7.x86_64                37/52
  Installing : glibc-headers-2.17-260.el7_6.6.x86_64                    38/52
  Installing : glibc-devel-2.17-260.el7_6.6.x86_64                      39/52
  Installing : gcc-4.8.5-36.el7_6.2.x86_64                              40/52
  Installing : zip-3.0-11.el7.x86_64                                    41/52
  Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch             42/52
  Installing : mercurial-2.6.2-8.el7_4.x86_64                           43/52
  Installing : trousers-0.3.14-2.el7.x86_64                             44/52
  Installing : gnutls-3.3.29-9.el7_6.x86_64                             45/52
  Installing : neon-0.30.0-3.el7.x86_64                                 46/52
  Installing : subversion-libs-1.7.14-14.el7.x86_64                     47/52
  Installing : subversion-1.7.14-14.el7.x86_64                          48/52
  Installing : golang-1.11.5-1.el7.x86_64                               49/52
  Installing : golang-bin-1.11.5-1.el7.x86_64                           50/52
  Installing : rpm-build-4.11.3-35.el7.x86_64                           51/52
  Installing : mock-1.4.16-1.el7.noarch                                 52/52
  Verifying  : trousers-0.3.14-2.el7.x86_64                              1/52
  Verifying  : python36-idna-2.7-2.el7.noarch                            2/52
  Verifying  : rpm-build-4.11.3-35.el7.x86_64                            3/52
  Verifying  : glibc-headers-2.17-260.el7_6.6.x86_64                     4/52
  Verifying  : python36-pysocks-1.6.8-6.el7.noarch                       5/52
  Verifying  : mercurial-2.6.2-8.el7_4.x86_64                            6/52
  Verifying  : zip-3.0-11.el7.x86_64                                     7/52
  Verifying  : python36-3.6.8-1.el7.x86_64                               8/52
  Verifying  : subversion-libs-1.7.14-14.el7.x86_64                      9/52
  Verifying  : python36-urllib3-1.19.1-5.el7.noarch                     10/52
  Verifying  : kernel-headers-3.10.0-957.27.2.el7.x86_64                11/52
  Verifying  : nettle-2.7.1-8.el7.x86_64                                12/52
  Verifying  : gcc-4.8.5-36.el7_6.2.x86_64                              13/52
  Verifying  : golang-src-1.11.5-1.el7.noarch                           14/52
  Verifying  : python36-pyroute2-0.4.13-2.el7.noarch                    15/52
  Verifying  : pigz-2.3.4-1.el7.x86_64                                  16/52
  Verifying  : perl-srpm-macros-1-8.el7.noarch                          17/52
  Verifying  : golang-1.11.5-1.el7.x86_64                               18/52
  Verifying  : perl-Thread-Queue-3.02-2.el7.noarch                      19/52
  Verifying  : golang-bin-1.11.5-1.el7.x86_64                           20/52
  Verifying  : gdb-7.6.1-114.el7.x86_64                                 21/52
  Verifying  : redhat-rpm-config-9.1.0-87.el7.centos.noarch             22/52
  Verifying  : gnutls-3.3.29-9.el7_6.x86_64                             23/52
  Verifying  : mock-1.4.16-1.el7.noarch                                 24/52
  Verifying  : libmodman-2.0.1-8.el7.x86_64                             25/52
  Verifying  : python36-setuptools-39.2.0-3.el7.noarch                  26/52
  Verifying  : mpfr-3.1.1-4.el7.x86_64                                  27/52
  Verifying  : python36-six-1.11.0-3.el7.noarch                         28/52
  Verifying  : apr-util-1.5.2-6.el7.x86_64                              29/52
  Verifying  : python36-chardet-2.3.0-6.el7.noarch                      30/52
  Verifying  : patch-2.7.1-10.el7_5.x86_64                              31/52
  Verifying  : libmpc-1.0.1-3.el7.x86_64                                32/52
  Verifying  : pakchois-0.4-10.el7.x86_64                               33/52
  Verifying  : neon-0.30.0-3.el7.x86_64                                 34/52
  Verifying  : usermode-1.111-5.el7.x86_64                              35/52
  Verifying  : apr-1.4.8-3.el7_4.1.x86_64                               36/52
  Verifying  : libproxy-0.4.11-11.el7.x86_64                            37/52
  Verifying  : mock-core-configs-30.4-1.el7.noarch                      38/52
  Verifying  : distribution-gpg-keys-1.32-1.el7.noarch                  39/52
  Verifying  : glibc-devel-2.17-260.el7_6.6.x86_64                      40/52
  Verifying  : bzip2-1.0.6-13.el7.x86_64                                41/52
  Verifying  : subversion-1.7.14-14.el7.x86_64                          42/52
  Verifying  : python36-distro-1.2.0-3.el7.noarch                       43/52
  Verifying  : dwz-0.11-3.el7.x86_64                                    44/52
  Verifying  : unzip-6.0-19.el7.x86_64                                  45/52
  Verifying  : python36-markupsafe-0.23-3.el7.x86_64                    46/52
  Verifying  : cpp-4.8.5-36.el7_6.2.x86_64                              47/52
  Verifying  : python36-requests-2.12.5-3.el7.noarch                    48/52
  Verifying  : python36-jinja2-2.8.1-2.el7.noarch                       49/52
  Verifying  : python36-libs-3.6.8-1.el7.x86_64                         50/52
  Verifying  : elfutils-0.172-2.el7.x86_64                              51/52
  Verifying  : python36-rpm-4.11.3-4.el7.x86_64                         52/52

Installed:
  golang.x86_64 0:1.11.5-1.el7    mock.noarch 0:1.4.16-1.el7    rpm-build.x86_64 0:4.11.3-35.el7

Dependency Installed:
  apr.x86_64 0:1.4.8-3.el7_4.1                 apr-util.x86_64 0:1.5.2-6.el7
  bzip2.x86_64 0:1.0.6-13.el7                  cpp.x86_64 0:4.8.5-36.el7_6.2
  distribution-gpg-keys.noarch 0:1.32-1.el7    dwz.x86_64 0:0.11-3.el7
  elfutils.x86_64 0:0.172-2.el7                gcc.x86_64 0:4.8.5-36.el7_6.2
  gdb.x86_64 0:7.6.1-114.el7                   glibc-devel.x86_64 0:2.17-260.el7_6.6
  glibc-headers.x86_64 0:2.17-260.el7_6.6      gnutls.x86_64 0:3.3.29-9.el7_6
  golang-bin.x86_64 0:1.11.5-1.el7             golang-src.noarch 0:1.11.5-1.el7
  kernel-headers.x86_64 0:3.10.0-957.27.2.el7  libmodman.x86_64 0:2.0.1-8.el7
  libmpc.x86_64 0:1.0.1-3.el7                  libproxy.x86_64 0:0.4.11-11.el7
  mercurial.x86_64 0:2.6.2-8.el7_4             mock-core-configs.noarch 0:30.4-1.el7
  mpfr.x86_64 0:3.1.1-4.el7                    neon.x86_64 0:0.30.0-3.el7
  nettle.x86_64 0:2.7.1-8.el7                  pakchois.x86_64 0:0.4-10.el7
  patch.x86_64 0:2.7.1-10.el7_5                perl-Thread-Queue.noarch 0:3.02-2.el7
  perl-srpm-macros.noarch 0:1-8.el7            pigz.x86_64 0:2.3.4-1.el7
  python36.x86_64 0:3.6.8-1.el7                python36-chardet.noarch 0:2.3.0-6.el7
  python36-distro.noarch 0:1.2.0-3.el7         python36-idna.noarch 0:2.7-2.el7
  python36-jinja2.noarch 0:2.8.1-2.el7         python36-libs.x86_64 0:3.6.8-1.el7
  python36-markupsafe.x86_64 0:0.23-3.el7      python36-pyroute2.noarch 0:0.4.13-2.el7
  python36-pysocks.noarch 0:1.6.8-6.el7        python36-requests.noarch 0:2.12.5-3.el7
  python36-rpm.x86_64 0:4.11.3-4.el7           python36-setuptools.noarch 0:39.2.0-3.el7
  python36-six.noarch 0:1.11.0-3.el7           python36-urllib3.noarch 0:1.19.1-5.el7
  redhat-rpm-config.noarch 0:9.1.0-87.el7.centos  subversion.x86_64 0:1.7.14-14.el7
  subversion-libs.x86_64 0:1.7.14-14.el7       trousers.x86_64 0:0.3.14-2.el7
  unzip.x86_64 0:6.0-19.el7                    usermode.x86_64 0:1.111-5.el7
  zip.x86_64 0:3.0-11.el7

Complete!
LINUX
Installing dep.
Version: v0.5.0
[...curl progress meter output omitted...]
Installing gometalinter. Version: 2.0.5
[...curl progress meter output omitted...]
Installing etcd. Version: v3.3.9
[...curl progress meter output omitted...]
~/nightlyrpmygUwiT/go/src/github.com/gluster/glusterd2 ~
Installing vendored packages
Creating dist archive /root/nightlyrpmygUwiT/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz
Created dist archive /root/nightlyrpmygUwiT/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz
~ ~/nightlyrpmygUwiT ~
INFO: mock.py version 1.4.16 starting (python version = 3.6.8)...
Start: init plugins
INFO: selinux disabled
Finish: init plugins
Start: run
INFO: Start(/root/nightlyrpmygUwiT/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm)  Config(epel-7-x86_64)
Start: clean chroot
Finish: clean chroot
Start: chroot init
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled yum cache
Start: cleaning yum metadata
Finish: cleaning yum metadata
INFO: enabled HW Info plugin
Mock Version: 1.4.16
INFO: Mock Version: 1.4.16
Start: yum install
Finish: yum install
Start: creating root cache
Finish: creating root cache
Finish: chroot init
INFO: Installed packages:
Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: Outputting list of installed packages
Finish: Outputting list of installed packages
ERROR: Exception(/root/nightlyrpmygUwiT/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 24 seconds
INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64
INFO: Cleaning up build root ('cleanup_on_failure=True')
Start: clean chroot
Finish: clean chroot
ERROR: Command failed:
 # /usr/bin/systemd-nspawn -q -M 5f6d6d0953344bf4be872121b3d7e01d -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.hi5n5c6w:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec

Build step 'Execute shell' marked build as failure
Performing Post build task...
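Regardless of build outcome, the job's post-build task hands the reserved CI node back to the pool by looping over the SSIDs recorded in `$WORKSPACE/cico-ssid` and calling the `cico` CLI for each. A runnable sketch of that release loop follows; the `cico` function is a stub standing in for the real CLI, and the temp-file path and SSID value are illustrative, taken from the log:

```shell
#!/bin/sh
# Sketch of the cico-node-done-from-ansible.sh release loop.
# The real job invokes Duffy's `cico` CLI; it is stubbed here so the
# loop itself can be exercised anywhere.
cico() {
    # real invocation shape: cico -q node done <ssid>
    echo "released $4"
}

SSID_FILE=${SSID_FILE:-/tmp/cico-ssid}   # the job uses $WORKSPACE/cico-ssid
printf '2d8cdcd1\n' > "$SSID_FILE"       # SSID value taken from the log below

for ssid in $(cat "$SSID_FILE")
do
    cico -q node done "$ssid"
done
```

Because the file normally holds one SSID per line, the same loop releases every node a multi-node job reserved.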
Match found for :Building remotely : True
Logical operation result is TRUE
Running script  :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins2715229300813105242.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done 2d8cdcd1
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor |
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 110 | n46.pufty | 172.19.3.110 | pufty | 3862 | Deployed | 2d8cdcd1 | None | None | 7 | x86_64 | 1 | 2450 | None |
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0

From ci at centos.org  Mon Aug  5 00:41:02 2019
From: ci at centos.org (ci at centos.org)
Date: Mon, 5 Aug 2019 00:41:02 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #250
In-Reply-To: <1734496983.2201.1564879254644.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1734496983.2201.1564879254644.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1699701078.2300.1564965662860.JavaMail.jenkins@jenkins.ci.centos.org>

See
------------------------------------------
[...truncated 289.06 KB...]
TASK [container-engine/docker : check number of search domains] ****************
Monday 05 August 2019  01:40:18 +0100 (0:00:00.336)       0:03:02.621 *********

TASK [container-engine/docker : check length of search domains] ****************
Monday 05 August 2019  01:40:19 +0100 (0:00:00.310)       0:03:02.932 *********

TASK [container-engine/docker : check for minimum kernel version] **************
Monday 05 August 2019  01:40:19 +0100 (0:00:00.344)       0:03:03.276 *********

TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] ***
Monday 05 August 2019  01:40:19 +0100 (0:00:00.293)       0:03:03.570 *********

TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] ***
Monday 05 August 2019  01:40:20 +0100 (0:00:00.601)       0:03:04.172 *********

TASK [container-engine/docker : ensure docker-ce repository public key is installed] ***
Monday 05 August 2019  01:40:21 +0100 (0:00:01.401)       0:03:05.573 *********

TASK [container-engine/docker : ensure docker-ce repository is enabled] ********
Monday 05 August 2019  01:40:21 +0100 (0:00:00.269)       0:03:05.843 *********

TASK [container-engine/docker : ensure docker-engine repository public key is installed] ***
Monday 05 August 2019  01:40:22 +0100 (0:00:00.270)       0:03:06.113 *********

TASK [container-engine/docker : ensure docker-engine repository is enabled] ****
Monday 05 August 2019  01:40:22 +0100 (0:00:00.309)       0:03:06.423 *********

TASK [container-engine/docker : Configure docker repository on Fedora] *********
Monday 05 August 2019  01:40:22 +0100 (0:00:00.317)       0:03:06.741 *********

TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] ***
Monday 05 August 2019  01:40:23 +0100 (0:00:00.288)       0:03:07.029 *********

TASK [container-engine/docker : Copy yum.conf for editing] *********************
Monday 05 August 2019  01:40:23 +0100 (0:00:00.310)       0:03:07.340 *********

TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ******
Monday 05 August 2019  01:40:23 +0100 (0:00:00.295)       0:03:07.636 *********

TASK [container-engine/docker : ensure docker packages are installed] **********
Monday 05 August 2019  01:40:24 +0100 (0:00:00.358)       0:03:07.995 *********

TASK [container-engine/docker : Ensure docker packages are installed] **********
Monday 05 August 2019  01:40:24 +0100 (0:00:00.363)       0:03:08.358 *********

TASK [container-engine/docker : get available packages on Ubuntu] **************
Monday 05 August 2019  01:40:24 +0100 (0:00:00.349)       0:03:08.708 *********

TASK [container-engine/docker : show available packages on ubuntu] *************
Monday 05 August 2019  01:40:25 +0100 (0:00:00.310)       0:03:09.019 *********

TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] ***
Monday 05 August 2019  01:40:25 +0100 (0:00:00.283)       0:03:09.302 *********

TASK [container-engine/docker : ensure service is started if docker packages are already present] ***
Monday 05 August 2019  01:40:25 +0100 (0:00:00.299)       0:03:09.602 *********
ok: [kube3]
ok: [kube1]
ok: [kube2]
 [WARNING]: flush_handlers task does not support when conditional

TASK [container-engine/docker : set fact for docker_version] *******************
Monday 05 August 2019  01:40:27 +0100 (0:00:02.094)       0:03:11.696 *********
ok: [kube1]
ok: [kube2]
ok: [kube3]

TASK [container-engine/docker : check minimum docker version for docker_dns mode. You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] ***
Monday 05 August 2019  01:40:28 +0100 (0:00:01.077)       0:03:12.774 *********

TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] ***
Monday 05 August 2019  01:40:29 +0100 (0:00:00.348)       0:03:13.123 *********
changed: [kube1]
changed: [kube2]
changed: [kube3]

TASK [container-engine/docker : Write docker proxy drop-in] ********************
Monday 05 August 2019  01:40:30 +0100 (0:00:01.042)       0:03:14.166 *********

TASK [container-engine/docker : get systemd version] ***************************
Monday 05 August 2019  01:40:30 +0100 (0:00:00.322)       0:03:14.488 *********

TASK [container-engine/docker : Write docker.service systemd file] *************
Monday 05 August 2019  01:40:30 +0100 (0:00:00.311)       0:03:14.800 *********

TASK [container-engine/docker : Write docker options systemd drop-in] **********
Monday 05 August 2019  01:40:31 +0100 (0:00:00.305)       0:03:15.106 *********
changed: [kube1]
changed: [kube3]
changed: [kube2]

TASK [container-engine/docker : Write docker dns systemd drop-in] **************
Monday 05 August 2019  01:40:33 +0100 (0:00:02.344)       0:03:17.450 *********
changed: [kube1]
changed: [kube2]
changed: [kube3]

TASK [container-engine/docker : Copy docker orphan clean up script to the node] ***
Monday 05 August 2019  01:40:35 +0100 (0:00:02.320)       0:03:19.771 *********

TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] ***
Monday 05 August 2019  01:40:36 +0100 (0:00:00.385)       0:03:20.156 *********

RUNNING HANDLER [container-engine/docker : restart docker] *********************
Monday 05 August 2019  01:40:36 +0100 (0:00:00.284)       0:03:20.440 *********
changed: [kube1]
changed: [kube2]
changed: [kube3]

RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************
Monday 05 August 2019  01:40:38 +0100 (0:00:01.878)       0:03:22.319 *********
changed: [kube2]
changed: [kube3]
changed: [kube1]

RUNNING HANDLER [container-engine/docker : Docker | reload docker.socket] ******
Monday 05 August 2019  01:40:39 +0100 (0:00:01.081)       0:03:23.401 *********

RUNNING HANDLER [container-engine/docker : Docker | reload docker] *************
Monday 05 August 2019  01:40:39 +0100 (0:00:00.337)       0:03:23.738 *********
changed: [kube1]
changed: [kube3]
changed: [kube2]

RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] ***
Monday 05 August 2019  01:40:44 +0100 (0:00:04.123)       0:03:27.861 *********
Pausing for 10 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
[container-engine/docker : Docker | pause while Docker restarts]
Waiting for docker restart:
ok: [kube1]

RUNNING HANDLER [container-engine/docker : Docker | wait for docker] ***********
Monday 05 August 2019  01:40:54 +0100 (0:00:10.217)       0:03:38.079 *********
changed: [kube1]
changed: [kube2]
changed: [kube3]

TASK [container-engine/docker : ensure docker service is started and enabled] ***
Monday 05 August 2019  01:40:55 +0100 (0:00:01.235)       0:03:39.315 *********
ok: [kube1] => (item=docker)
ok: [kube2] => (item=docker)
ok: [kube3] => (item=docker)

TASK [download : include_tasks] ************************************************
Monday 05 August 2019  01:40:56 +0100 (0:00:01.445)       0:03:40.761 *********
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3

TASK [download : Register docker images info] **********************************
Monday 05 August 2019  01:40:57 +0100 (0:00:01.068)       0:03:41.290 *********
ok: [kube1]
ok: [kube2]
ok: [kube3]

TASK [download : container_download | Create dest directory for saved/loaded container images] ***
Monday 05 August 2019  01:40:58 +0100 (0:00:00.947)       0:03:42.359 *********
changed: [kube1]
changed: [kube3]
changed: [kube2]

TASK [download : container_download | create local directory for saved/loaded container images] ***
Monday 05 August 2019  01:40:59 +0100 (0:00:00.109)       0:03:43.306 *********

TASK [download : Download items] ***********************************************
Monday 05 August 2019  01:40:59 +0100 (0:00:00.109)       0:03:43.416 *********
fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
fatal: [kube2]: FAILED! => [...same 'delegate_to' error...]
fatal: [kube3]: FAILED! => [...same 'delegate_to' error...]
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
[...the identical 'delegate_to' failure repeats for each remaining download item on kube1, kube2 and kube3...]

PLAY RECAP *********************************************************************
kube1                      : ok=109  changed=22  unreachable=0  failed=10  skipped=116  rescued=0  ignored=0
kube2                      : ok=96   changed=22  unreachable=0  failed=10  skipped=111  rescued=0  ignored=0
kube3                      : ok=94   changed=22  unreachable=0  failed=10  skipped=113  rescued=0  ignored=0

Monday 05 August 2019  01:41:02 +0100 (0:00:02.905)       0:03:46.321 *********
===============================================================================
Install packages ------------------------------------------------------- 35.24s
Wait for host to be available ------------------------------------------ 21.42s
gather facts from all instances ---------------------------------------- 17.74s
container-engine/docker : Docker | pause while Docker restarts --------- 10.22s
Persist loaded modules -------------------------------------------------- 6.27s
container-engine/docker : Docker | reload docker ------------------------ 4.12s
kubernetes/preinstall : Create kubernetes directories ------------------- 4.04s
download : Download items ----------------------------------------------- 2.91s
Load required kernel modules -------------------------------------------- 2.73s
bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.71s
bootstrap-os : Gather nodes hostnames ----------------------------------- 2.65s
kubernetes/preinstall : Create cni directories -------------------------- 2.55s
Extend root VG ---------------------------------------------------------- 2.49s
kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.37s
container-engine/docker : Write docker options systemd drop-in ---------- 2.34s
container-engine/docker : Write docker dns systemd drop-in -------------- 2.32s
download : Sync container ----------------------------------------------- 2.15s
Gathering Facts --------------------------------------------------------- 2.13s
container-engine/docker : ensure service is started if docker packages are already present --- 2.09s
kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.09s

==> kube3: An error occurred. The error will be shown after all tasks complete.
An error occurred while executing multiple actions in parallel.
Any errors that occurred are shown below.

An error occurred while executing the action on the 'kube3' machine.
Please handle this error then try again:

Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.

Build step 'Execute shell' marked build as failure
Performing Post build task...
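The repeated failure above comes from Ansible rejecting `delegate_to` as a keyword on a task that itself includes other tasks (a `TaskInclude`): on the Ansible release in use, directives like `delegate_to` must be passed through to the included tasks via the `apply` parameter rather than set on the include itself. A minimal sketch of the broken shape and one possible fix; the variable `download_delegate` and the task names mirror the log, but this is illustrative, not kubespray's actual code:

```yaml
# Broken: delegate_to is not a valid attribute for a TaskInclude.
- name: container_download | Make download decision if pull is required by tag or sha256
  include_tasks: download_container.yml
  delegate_to: "{{ download_delegate }}"   # rejected at parse time

# One possible fix: pass the directive through with `apply`
# (or move delegate_to onto the tasks inside the included file).
- name: container_download | Make download decision if pull is required by tag or sha256
  include_tasks:
    file: download_container.yml
    apply:
      delegate_to: "{{ download_delegate }}"
```

Because the include is parsed before any host runs it, the error is reported once per host per include, which is why the identical message floods the log.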
Could not match :Build started : False
Logical operation result is FALSE
Skipping script  :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

END OF POST BUILD TASK : 0

From ci at centos.org  Mon Aug  5 01:15:07 2019
From: ci at centos.org (ci at centos.org)
Date: Mon, 5 Aug 2019 01:15:07 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #275
In-Reply-To: <1658276098.2206.1564881844510.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1658276098.2206.1564881844510.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1180993149.2307.1564967707976.JavaMail.jenkins@jenkins.ci.centos.org>

See
------------------------------------------
[...truncated 57.23 KB...]
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) creation to complete] *******************************
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=6    changed=4    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0

--> Scenario: 'default'
--> Action: 'prepare'

PLAY [Prepare] *****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

PLAY RECAP *********************************************************************
instance                   : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... [701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], 
u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── create └── prepare --> Scenario: 'default' --> Action: 'create' [DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg. PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] 
******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', 
u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
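[Editorial note: the ansible-lint findings above ([701] "Role info should contain platforms" and the four [703] "Should change default metadata" hits) are all triggered by the role's meta/main.yml still carrying the unedited ansible-galaxy skeleton ("your name", "your description", and no platforms list). A minimal galaxy_info that would satisfy both rules could look like the sketch below; the author, description, company, license, and platform values here are illustrative placeholders, not the project's actual metadata.]

```yaml
# roles/firewall_config/meta/main.yml -- sketch only; values are examples
galaxy_info:
  author: Gluster maintainers            # non-default author, clears [703]
  description: Configure firewalld for GlusterFS nodes   # clears [703]
  company: Gluster Community             # clears [703]
  license: GPLv3                         # a concrete license, clears [703]
  min_ansible_version: 2.5
  platforms:                             # presence of this key clears [701]
    - name: EL
      versions:
        - 7
  galaxy_tags: []
dependencies: []
```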
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Tue Aug 6 00:16:10 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 6 Aug 2019 00:16:10 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #447 In-Reply-To: <15997938.2298.1564964211918.JavaMail.jenkins@jenkins.ci.centos.org> References: <15997938.2298.1564964211918.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1352656229.2430.1565050570789.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 38.67 KB...] Transaction test succeeded Running transaction Installing : python36-libs-3.6.8-1.el7.x86_64 1/52 Installing : python36-3.6.8-1.el7.x86_64 2/52 Installing : apr-1.4.8-3.el7_4.1.x86_64 3/52 Installing : mpfr-3.1.1-4.el7.x86_64 4/52 Installing : libmpc-1.0.1-3.el7.x86_64 5/52 Installing : apr-util-1.5.2-6.el7.x86_64 6/52 Installing : python36-six-1.11.0-3.el7.noarch 7/52 Installing : cpp-4.8.5-36.el7_6.2.x86_64 8/52 Installing : python36-idna-2.7-2.el7.noarch 9/52 Installing : python36-pysocks-1.6.8-6.el7.noarch 10/52 Installing : python36-urllib3-1.19.1-5.el7.noarch 11/52 Installing : python36-pyroute2-0.4.13-2.el7.noarch 12/52 Installing : python36-setuptools-39.2.0-3.el7.noarch 13/52 Installing : python36-chardet-2.3.0-6.el7.noarch 14/52 Installing : python36-requests-2.12.5-3.el7.noarch 15/52 Installing : python36-distro-1.2.0-3.el7.noarch 16/52 Installing : python36-markupsafe-0.23-3.el7.x86_64 17/52 Installing : python36-jinja2-2.8.1-2.el7.noarch 18/52 Installing : python36-rpm-4.11.3-4.el7.x86_64 19/52 Installing : elfutils-0.172-2.el7.x86_64 20/52 Installing : unzip-6.0-19.el7.x86_64 21/52 Installing : dwz-0.11-3.el7.x86_64 
22/52 Installing : bzip2-1.0.6-13.el7.x86_64 23/52 Installing : distribution-gpg-keys-1.32-1.el7.noarch 24/52 Installing : mock-core-configs-30.4-1.el7.noarch 25/52 Installing : usermode-1.111-5.el7.x86_64 26/52 Installing : pakchois-0.4-10.el7.x86_64 27/52 Installing : patch-2.7.1-10.el7_5.x86_64 28/52 Installing : libmodman-2.0.1-8.el7.x86_64 29/52 Installing : libproxy-0.4.11-11.el7.x86_64 30/52 Installing : gdb-7.6.1-114.el7.x86_64 31/52 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/52 Installing : perl-srpm-macros-1-8.el7.noarch 33/52 Installing : pigz-2.3.4-1.el7.x86_64 34/52 Installing : golang-src-1.11.5-1.el7.noarch 35/52 Installing : nettle-2.7.1-8.el7.x86_64 36/52 Installing : kernel-headers-3.10.0-957.27.2.el7.x86_64 37/52 Installing : glibc-headers-2.17-260.el7_6.6.x86_64 38/52 Installing : glibc-devel-2.17-260.el7_6.6.x86_64 39/52 Installing : gcc-4.8.5-36.el7_6.2.x86_64 40/52 Installing : zip-3.0-11.el7.x86_64 41/52 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 42/52 Installing : mercurial-2.6.2-8.el7_4.x86_64 43/52 Installing : trousers-0.3.14-2.el7.x86_64 44/52 Installing : gnutls-3.3.29-9.el7_6.x86_64 45/52 Installing : neon-0.30.0-3.el7.x86_64 46/52 Installing : subversion-libs-1.7.14-14.el7.x86_64 47/52 Installing : subversion-1.7.14-14.el7.x86_64 48/52 Installing : golang-1.11.5-1.el7.x86_64 49/52 Installing : golang-bin-1.11.5-1.el7.x86_64 50/52 Installing : rpm-build-4.11.3-35.el7.x86_64 51/52 Installing : mock-1.4.16-1.el7.noarch 52/52 Verifying : trousers-0.3.14-2.el7.x86_64 1/52 Verifying : python36-idna-2.7-2.el7.noarch 2/52 Verifying : rpm-build-4.11.3-35.el7.x86_64 3/52 Verifying : glibc-headers-2.17-260.el7_6.6.x86_64 4/52 Verifying : python36-pysocks-1.6.8-6.el7.noarch 5/52 Verifying : mercurial-2.6.2-8.el7_4.x86_64 6/52 Verifying : zip-3.0-11.el7.x86_64 7/52 Verifying : python36-3.6.8-1.el7.x86_64 8/52 Verifying : subversion-libs-1.7.14-14.el7.x86_64 9/52 Verifying : python36-urllib3-1.19.1-5.el7.noarch 10/52 
Verifying : kernel-headers-3.10.0-957.27.2.el7.x86_64 11/52 Verifying : nettle-2.7.1-8.el7.x86_64 12/52 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 13/52 Verifying : golang-src-1.11.5-1.el7.noarch 14/52 Verifying : python36-pyroute2-0.4.13-2.el7.noarch 15/52 Verifying : pigz-2.3.4-1.el7.x86_64 16/52 Verifying : perl-srpm-macros-1-8.el7.noarch 17/52 Verifying : golang-1.11.5-1.el7.x86_64 18/52 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 19/52 Verifying : golang-bin-1.11.5-1.el7.x86_64 20/52 Verifying : gdb-7.6.1-114.el7.x86_64 21/52 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 22/52 Verifying : gnutls-3.3.29-9.el7_6.x86_64 23/52 Verifying : mock-1.4.16-1.el7.noarch 24/52 Verifying : libmodman-2.0.1-8.el7.x86_64 25/52 Verifying : python36-setuptools-39.2.0-3.el7.noarch 26/52 Verifying : mpfr-3.1.1-4.el7.x86_64 27/52 Verifying : python36-six-1.11.0-3.el7.noarch 28/52 Verifying : apr-util-1.5.2-6.el7.x86_64 29/52 Verifying : python36-chardet-2.3.0-6.el7.noarch 30/52 Verifying : patch-2.7.1-10.el7_5.x86_64 31/52 Verifying : libmpc-1.0.1-3.el7.x86_64 32/52 Verifying : pakchois-0.4-10.el7.x86_64 33/52 Verifying : neon-0.30.0-3.el7.x86_64 34/52 Verifying : usermode-1.111-5.el7.x86_64 35/52 Verifying : apr-1.4.8-3.el7_4.1.x86_64 36/52 Verifying : libproxy-0.4.11-11.el7.x86_64 37/52 Verifying : mock-core-configs-30.4-1.el7.noarch 38/52 Verifying : distribution-gpg-keys-1.32-1.el7.noarch 39/52 Verifying : glibc-devel-2.17-260.el7_6.6.x86_64 40/52 Verifying : bzip2-1.0.6-13.el7.x86_64 41/52 Verifying : subversion-1.7.14-14.el7.x86_64 42/52 Verifying : python36-distro-1.2.0-3.el7.noarch 43/52 Verifying : dwz-0.11-3.el7.x86_64 44/52 Verifying : unzip-6.0-19.el7.x86_64 45/52 Verifying : python36-markupsafe-0.23-3.el7.x86_64 46/52 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 47/52 Verifying : python36-requests-2.12.5-3.el7.noarch 48/52 Verifying : python36-jinja2-2.8.1-2.el7.noarch 49/52 Verifying : python36-libs-3.6.8-1.el7.x86_64 50/52 Verifying : 
elfutils-0.172-2.el7.x86_64 51/52 Verifying : python36-rpm-4.11.3-4.el7.x86_64 52/52 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.16-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.32-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.6 glibc-headers.x86_64 0:2.17-260.el7_6.6 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.27.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.4-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python36.x86_64 0:3.6.8-1.el7 python36-chardet.noarch 0:2.3.0-6.el7 python36-distro.noarch 0:1.2.0-3.el7 python36-idna.noarch 0:2.7-2.el7 python36-jinja2.noarch 0:2.8.1-2.el7 python36-libs.x86_64 0:3.6.8-1.el7 python36-markupsafe.x86_64 0:0.23-3.el7 python36-pyroute2.noarch 0:0.4.13-2.el7 python36-pysocks.noarch 0:1.6.8-6.el7 python36-requests.noarch 0:2.12.5-3.el7 python36-rpm.x86_64 0:4.11.3-4.el7 python36-setuptools.noarch 0:39.2.0-3.el7 python36-six.noarch 0:1.11.0-3.el7 python36-urllib3.noarch 0:1.19.1-5.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1522 0 --:--:-- --:--:-- --:--:-- 1527 100 8513k 100 8513k 0 0 11.0M 0 --:--:-- --:--:-- --:--:-- 11.0M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2155 0 --:--:-- --:--:-- --:--:-- 2154 64 38.3M 64 24.8M 0 0 23.5M 0 0:00:01 0:00:01 --:--:-- 23.5M100 38.3M 100 38.3M 0 0 28.3M 0 0:00:01 0:00:01 --:--:-- 45.1M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 591 0 --:--:-- --:--:-- --:--:-- 590 0 0 0 620 0 0 1805 0 --:--:-- --:--:-- --:--:-- 1805 100 10.7M 100 10.7M 0 0 14.9M 0 --:--:-- --:--:-- --:--:-- 14.9M ~/nightlyrpmqmffu2/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmqmffu2/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmqmffu2/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmqmffu2 ~ INFO: mock.py version 1.4.16 starting (python version = 3.6.8)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmqmffu2/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.16 INFO: Mock Version: 1.4.16 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmqmffu2/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 26 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 0aa53e015abb40e19c6670b82138aff9 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.0aco5gbg:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins8070605412309056880.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done 2430a54c +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 162 | n35.crusty | 172.19.2.35 | crusty | 3866 | Deployed | 2430a54c | None | None | 7 | x86_64 | 1 | 2340 | None | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Tue Aug 6 00:41:03 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 6 Aug 2019 00:41:03 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #251 In-Reply-To: <1699701078.2300.1564965662860.JavaMail.jenkins@jenkins.ci.centos.org> References: <1699701078.2300.1564965662860.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1593021371.2435.1565052063656.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 289.13 KB...] 
TASK [container-engine/docker : check number of search domains] **************** Tuesday 06 August 2019 01:40:19 +0100 (0:00:00.303) 0:03:01.335 ******** TASK [container-engine/docker : check length of search domains] **************** Tuesday 06 August 2019 01:40:20 +0100 (0:00:00.363) 0:03:01.699 ******** TASK [container-engine/docker : check for minimum kernel version] ************** Tuesday 06 August 2019 01:40:20 +0100 (0:00:00.323) 0:03:02.023 ******** TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Tuesday 06 August 2019 01:40:20 +0100 (0:00:00.311) 0:03:02.335 ******** TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] *** Tuesday 06 August 2019 01:40:21 +0100 (0:00:00.632) 0:03:02.968 ******** TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Tuesday 06 August 2019 01:40:22 +0100 (0:00:01.326) 0:03:04.294 ******** TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Tuesday 06 August 2019 01:40:23 +0100 (0:00:00.272) 0:03:04.566 ******** TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Tuesday 06 August 2019 01:40:23 +0100 (0:00:00.349) 0:03:04.916 ******** TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Tuesday 06 August 2019 01:40:23 +0100 (0:00:00.306) 0:03:05.223 ******** TASK [container-engine/docker : Configure docker repository on Fedora] ********* Tuesday 06 August 2019 01:40:24 +0100 (0:00:00.311) 0:03:05.534 ******** TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Tuesday 06 August 2019 01:40:24 +0100 (0:00:00.277) 0:03:05.812 ******** TASK [container-engine/docker : Copy yum.conf for editing] ********************* Tuesday 06 August 2019 01:40:24 +0100 (0:00:00.307) 0:03:06.119 ******** TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Tuesday 06 August 
2019 01:40:24 +0100 (0:00:00.285) 0:03:06.405 ******** TASK [container-engine/docker : ensure docker packages are installed] ********** Tuesday 06 August 2019 01:40:25 +0100 (0:00:00.295) 0:03:06.700 ******** TASK [container-engine/docker : Ensure docker packages are installed] ********** Tuesday 06 August 2019 01:40:25 +0100 (0:00:00.351) 0:03:07.052 ******** TASK [container-engine/docker : get available packages on Ubuntu] ************** Tuesday 06 August 2019 01:40:25 +0100 (0:00:00.390) 0:03:07.443 ******** TASK [container-engine/docker : show available packages on ubuntu] ************* Tuesday 06 August 2019 01:40:26 +0100 (0:00:00.287) 0:03:07.731 ******** TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Tuesday 06 August 2019 01:40:26 +0100 (0:00:00.280) 0:03:08.011 ******** TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Tuesday 06 August 2019 01:40:26 +0100 (0:00:00.290) 0:03:08.302 ******** ok: [kube2] ok: [kube1] ok: [kube3] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Tuesday 06 August 2019 01:40:28 +0100 (0:00:01.924) 0:03:10.226 ******** ok: [kube1] ok: [kube3] ok: [kube2] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Tuesday 06 August 2019 01:40:29 +0100 (0:00:01.143) 0:03:11.370 ******** TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Tuesday 06 August 2019 01:40:30 +0100 (0:00:00.315) 0:03:11.685 ******** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Tuesday 06 August 2019 01:40:31 +0100 (0:00:01.131) 0:03:12.817 ******** TASK [container-engine/docker : get systemd version] *************************** Tuesday 06 August 2019 01:40:31 +0100 (0:00:00.321) 0:03:13.138 ******** TASK [container-engine/docker : Write docker.service systemd file] ************* Tuesday 06 August 2019 01:40:31 +0100 (0:00:00.367) 0:03:13.506 ******** TASK [container-engine/docker : Write docker options systemd drop-in] ********** Tuesday 06 August 2019 01:40:32 +0100 (0:00:00.347) 0:03:13.854 ******** changed: [kube1] changed: [kube3] changed: [kube2] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Tuesday 06 August 2019 01:40:34 +0100 (0:00:02.217) 0:03:16.072 ******** changed: [kube1] changed: [kube3] changed: [kube2] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Tuesday 06 August 2019 01:40:36 +0100 (0:00:02.199) 0:03:18.272 ******** TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Tuesday 06 August 2019 01:40:37 +0100 (0:00:00.345) 0:03:18.617 ******** RUNNING HANDLER [container-engine/docker : restart docker] ********************* Tuesday 06 August 2019 01:40:37 +0100 (0:00:00.235) 0:03:18.853 ******** changed: [kube1] changed: [kube3] changed: [kube2] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Tuesday 06 August 2019 01:40:39 +0100 (0:00:01.950) 0:03:20.804 ******** changed: [kube3] changed: [kube1] changed: [kube2] RUNNING HANDLER 
[container-engine/docker : Docker | reload docker.socket] ****** Tuesday 06 August 2019 01:40:40 +0100 (0:00:01.091) 0:03:21.896 ******** RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Tuesday 06 August 2019 01:40:40 +0100 (0:00:00.309) 0:03:22.205 ******** changed: [kube1] changed: [kube3] changed: [kube2] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Tuesday 06 August 2019 01:40:44 +0100 (0:00:04.070) 0:03:26.275 ******** Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube1] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Tuesday 06 August 2019 01:40:55 +0100 (0:00:10.263) 0:03:36.539 ******** changed: [kube1] changed: [kube3] changed: [kube2] TASK [container-engine/docker : ensure docker service is started and enabled] *** Tuesday 06 August 2019 01:40:56 +0100 (0:00:01.193) 0:03:37.733 ******** ok: [kube1] => (item=docker) ok: [kube2] => (item=docker) ok: [kube3] => (item=docker) TASK [download : include_tasks] ************************************************ Tuesday 06 August 2019 01:40:57 +0100 (0:00:01.408) 0:03:39.141 ******** included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Tuesday 06 August 2019 01:40:58 +0100 (0:00:00.526) 0:03:39.667 ******** ok: [kube1] ok: [kube2] ok: [kube3] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Tuesday 06 August 2019 01:40:59 +0100 (0:00:01.148) 0:03:40.816 ******** changed: [kube1] changed: [kube2] changed: [kube3] TASK [download : container_download | create local directory for saved/loaded container images] *** Tuesday 06 August 2019 01:41:00 +0100 (0:00:01.065) 0:03:41.882 ******** TASK [download : 
Download items] ***********************************************
Tuesday 06 August 2019 01:41:00 +0100 (0:00:00.135) 0:03:42.017 ********
fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude

The error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

---
- name: container_download | Make download decision if pull is required by tag or sha256
  ^ here
"}
[...the identical fatal error is reported for kube2 and kube3 and repeats for every included download task (failed=10 per host in the recap below); duplicate messages omitted...]
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
PLAY RECAP *********************************************************************
kube1 : ok=109 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0
kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0
kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0
Tuesday 06 August 2019 01:41:03 +0100 (0:00:02.736) 0:03:44.754 ********
===============================================================================
Install packages ------------------------------------------------------- 34.86s
Wait for host to be available ------------------------------------------ 21.54s
gather facts from all instances ---------------------------------------- 17.13s
container-engine/docker : Docker | pause while Docker restarts --------- 10.26s
Persist loaded modules -------------------------------------------------- 6.06s
kubernetes/preinstall : Create kubernetes directories ------------------- 4.23s
container-engine/docker : Docker | reload docker ------------------------ 4.07s
download : Download items ----------------------------------------------- 2.74s
bootstrap-os : Assign 
inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.69s
Load required kernel modules -------------------------------------------- 2.67s
bootstrap-os : Gather nodes hostnames ----------------------------------- 2.64s
Extend root VG ---------------------------------------------------------- 2.52s
kubernetes/preinstall : Create cni directories -------------------------- 2.47s
kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.38s
Gathering Facts --------------------------------------------------------- 2.24s
container-engine/docker : Write docker options systemd drop-in ---------- 2.22s
container-engine/docker : Write docker dns systemd drop-in -------------- 2.20s
download : Download items ----------------------------------------------- 2.16s
kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.10s
download : Sync container ----------------------------------------------- 2.07s
==> kube3: An error occurred. The error will be shown after all tasks complete.
An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below.
An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again:
Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again.
Build step 'Execute shell' marked build as failure
Performing Post build task... 
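[Editor's note: the repeated "'delegate_to' is not a valid attribute for a TaskInclude" failure occurs because recent Ansible releases reject task keywords such as `delegate_to` placed directly on a dynamic `include_tasks`. A minimal hedged sketch of the pattern and one possible fix follows; the task names and the `download_delegate` variable are illustrative, not the exact kubespray playbook text.]

```yaml
# Rejected by newer Ansible: the keyword sits on the include itself
- name: container_download | Download containers
  include_tasks: download_container.yml
  delegate_to: "{{ download_delegate }}"   # illustrative variable

# Possible fix: push the keyword down to the included tasks via 'apply'
- name: container_download | Download containers
  include_tasks:
    file: download_container.yml
    apply:
      delegate_to: "{{ download_delegate }}"
```

Alternatively, a static `import_tasks` (which applies keywords to the imported tasks) or setting `delegate_to` on each task inside `download_container.yml` avoids the restriction.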
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Tue Aug 6 01:24:00 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 6 Aug 2019 01:24:00 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #276 In-Reply-To: <1180993149.2307.1564967707976.JavaMail.jenkins@jenkins.ci.centos.org> References: <1180993149.2307.1564967707976.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <171350722.2442.1565054640098.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 57.26 KB...] changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. 
Validation completed successfully.
--> Test matrix
    └── default
        ├── lint
        ├── cleanup
        ├── destroy
        ├── dependency
        ├── syntax
        ├── create
        ├── prepare
        ├── converge
        ├── idempotence
        ├── side_effect
        ├── verify
        ├── cleanup
        └── destroy
--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: author
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[703] Should change default metadata: description
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[703] Should change default metadata: company
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[703] Should change default metadata: license
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[...the same metadata dump is printed verbatim after each finding; duplicates omitted...]
An error occurred during the test sequence action: 'lint'. Cleaning up.
--> Scenario: 'default'
--> Action: 'destroy'
PLAY [Destroy] *****************************************************************
TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Delete docker network(s)] ************************************************
PLAY RECAP *********************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix
    └── default
        ├── create
        └── prepare
--> Scenario: 'default'
--> Action: 'create'
[DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility
but usage is discouraged. The module documentation details page may explain
more about this rationale. This feature will be removed in a future release.
Deprecation warnings can be disabled by setting deprecation_warnings=False in
ansible.cfg.
PLAY [Create] ******************************************************************
TASK [Log into a Docker registry] **********************************************
skipping: [localhost] => (item=None)
TASK [Create Dockerfiles from image names] *************************************
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Discover local Docker images] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]
TASK [Build an Ansible compatible image] ***************************************
ok: [localhost] => (item=None)
ok: [localhost]
TASK [Create docker network(s)] ************************************************
TASK [Determine the CMD directives] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]
TASK [Create molecule instance(s)] *********************************************
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Wait for instance(s) creation to complete] 
*******************************
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
[...the 'prepare', 'lint', and 'destroy' actions then repeat exactly as in the first run: lint fails again on the same [701]/[703] meta/main.yml findings and the instance is destroyed; duplicate output omitted...]
Build step 'Execute shell' marked build as failure
Performing Post build task... 
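[Editor's note: the [701]/[703] ansible-lint failures are triggered by the unmodified `ansible-galaxy init` boilerplate in the role's `meta/main.yml`. A hedged sketch of metadata that would satisfy both rules follows; the values are illustrative placeholders, not the project's actual metadata.]

```yaml
galaxy_info:
  author: Gluster maintainers                               # replaces default "your name"        [703]
  description: Firewall configuration for GlusterFS nodes   # replaces "your description"         [703]
  company: Gluster Community                                # replaces "your company (optional)"  [703]
  license: GPLv3                                            # replaces "license (GPLv2, CC-BY, etc)" [703]
  min_ansible_version: "2.7"
  platforms:                                                # [701] role info should contain platforms
    - name: EL
      versions:
        - 7
  galaxy_tags: []
dependencies: []
```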
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Wed Aug 7 00:16:49 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 7 Aug 2019 00:16:49 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #448 In-Reply-To: <1352656229.2430.1565050570789.JavaMail.jenkins@jenkins.ci.centos.org> References: <1352656229.2430.1565050570789.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1272528612.2564.1565137009296.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 38.99 KB...] Transaction test succeeded Running transaction Installing : python36-libs-3.6.8-1.el7.x86_64 1/52 Installing : python36-3.6.8-1.el7.x86_64 2/52 Installing : apr-1.4.8-3.el7_4.1.x86_64 3/52 Installing : mpfr-3.1.1-4.el7.x86_64 4/52 Installing : libmpc-1.0.1-3.el7.x86_64 5/52 Installing : apr-util-1.5.2-6.el7.x86_64 6/52 Installing : python36-six-1.11.0-3.el7.noarch 7/52 Installing : cpp-4.8.5-36.el7_6.2.x86_64 8/52 Installing : python36-idna-2.7-2.el7.noarch 9/52 Installing : python36-pysocks-1.6.8-6.el7.noarch 10/52 Installing : python36-urllib3-1.19.1-5.el7.noarch 11/52 Installing : python36-pyroute2-0.4.13-2.el7.noarch 12/52 Installing : python36-setuptools-39.2.0-3.el7.noarch 13/52 Installing : python36-chardet-2.3.0-6.el7.noarch 14/52 Installing : python36-requests-2.12.5-3.el7.noarch 15/52 Installing : python36-distro-1.2.0-3.el7.noarch 16/52 Installing : python36-markupsafe-0.23-3.el7.x86_64 17/52 Installing : python36-jinja2-2.8.1-2.el7.noarch 18/52 Installing : python36-rpm-4.11.3-4.el7.x86_64 19/52 Installing : elfutils-0.172-2.el7.x86_64 20/52 Installing : unzip-6.0-19.el7.x86_64 21/52 Installing : dwz-0.11-3.el7.x86_64 
22/52 
  Installing : bzip2-1.0.6-13.el7.x86_64                                 23/52 
  Installing : distribution-gpg-keys-1.32-1.el7.noarch                   24/52 
  Installing : mock-core-configs-30.4-1.el7.noarch                       25/52 
  Installing : usermode-1.111-5.el7.x86_64                               26/52 
  Installing : pakchois-0.4-10.el7.x86_64                                27/52 
  Installing : patch-2.7.1-10.el7_5.x86_64                               28/52 
  Installing : libmodman-2.0.1-8.el7.x86_64                              29/52 
  Installing : libproxy-0.4.11-11.el7.x86_64                             30/52 
  Installing : gdb-7.6.1-114.el7.x86_64                                  31/52 
  Installing : perl-Thread-Queue-3.02-2.el7.noarch                       32/52 
  Installing : perl-srpm-macros-1-8.el7.noarch                           33/52 
  Installing : pigz-2.3.4-1.el7.x86_64                                   34/52 
  Installing : golang-src-1.11.5-1.el7.noarch                            35/52 
  Installing : nettle-2.7.1-8.el7.x86_64                                 36/52 
  Installing : kernel-headers-3.10.0-957.27.2.el7.x86_64                 37/52 
  Installing : glibc-headers-2.17-260.el7_6.6.x86_64                     38/52 
  Installing : glibc-devel-2.17-260.el7_6.6.x86_64                       39/52 
  Installing : gcc-4.8.5-36.el7_6.2.x86_64                               40/52 
  Installing : zip-3.0-11.el7.x86_64                                     41/52 
  Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch              42/52 
  Installing : mercurial-2.6.2-8.el7_4.x86_64                            43/52 
  Installing : trousers-0.3.14-2.el7.x86_64                              44/52 
  Installing : gnutls-3.3.29-9.el7_6.x86_64                              45/52 
  Installing : neon-0.30.0-3.el7.x86_64                                  46/52 
  Installing : subversion-libs-1.7.14-14.el7.x86_64                      47/52 
  Installing : subversion-1.7.14-14.el7.x86_64                           48/52 
  Installing : golang-1.11.5-1.el7.x86_64                                49/52 
  Installing : golang-bin-1.11.5-1.el7.x86_64                            50/52 
  Installing : rpm-build-4.11.3-35.el7.x86_64                            51/52 
  Installing : mock-1.4.16-1.el7.noarch                                  52/52 
  Verifying  : trousers-0.3.14-2.el7.x86_64                               1/52 
  Verifying  : python36-idna-2.7-2.el7.noarch                             2/52 
  Verifying  : rpm-build-4.11.3-35.el7.x86_64                             3/52 
  Verifying  : glibc-headers-2.17-260.el7_6.6.x86_64                      4/52 
  Verifying  : python36-pysocks-1.6.8-6.el7.noarch                        5/52 
  Verifying  : mercurial-2.6.2-8.el7_4.x86_64                             6/52 
  Verifying  : zip-3.0-11.el7.x86_64                                      7/52 
  Verifying  : python36-3.6.8-1.el7.x86_64                                8/52 
  Verifying  : subversion-libs-1.7.14-14.el7.x86_64                       9/52 
  Verifying  : python36-urllib3-1.19.1-5.el7.noarch                      10/52 
  Verifying  : kernel-headers-3.10.0-957.27.2.el7.x86_64                 11/52 
  Verifying  : nettle-2.7.1-8.el7.x86_64                                 12/52 
  Verifying  : gcc-4.8.5-36.el7_6.2.x86_64                               13/52 
  Verifying  : golang-src-1.11.5-1.el7.noarch                            14/52 
  Verifying  : python36-pyroute2-0.4.13-2.el7.noarch                     15/52 
  Verifying  : pigz-2.3.4-1.el7.x86_64                                   16/52 
  Verifying  : perl-srpm-macros-1-8.el7.noarch                           17/52 
  Verifying  : golang-1.11.5-1.el7.x86_64                                18/52 
  Verifying  : perl-Thread-Queue-3.02-2.el7.noarch                       19/52 
  Verifying  : golang-bin-1.11.5-1.el7.x86_64                            20/52 
  Verifying  : gdb-7.6.1-114.el7.x86_64                                  21/52 
  Verifying  : redhat-rpm-config-9.1.0-87.el7.centos.noarch              22/52 
  Verifying  : gnutls-3.3.29-9.el7_6.x86_64                              23/52 
  Verifying  : mock-1.4.16-1.el7.noarch                                  24/52 
  Verifying  : libmodman-2.0.1-8.el7.x86_64                              25/52 
  Verifying  : python36-setuptools-39.2.0-3.el7.noarch                   26/52 
  Verifying  : mpfr-3.1.1-4.el7.x86_64                                   27/52 
  Verifying  : python36-six-1.11.0-3.el7.noarch                          28/52 
  Verifying  : apr-util-1.5.2-6.el7.x86_64                               29/52 
  Verifying  : python36-chardet-2.3.0-6.el7.noarch                       30/52 
  Verifying  : patch-2.7.1-10.el7_5.x86_64                               31/52 
  Verifying  : libmpc-1.0.1-3.el7.x86_64                                 32/52 
  Verifying  : pakchois-0.4-10.el7.x86_64                                33/52 
  Verifying  : neon-0.30.0-3.el7.x86_64                                  34/52 
  Verifying  : usermode-1.111-5.el7.x86_64                               35/52 
  Verifying  : apr-1.4.8-3.el7_4.1.x86_64                                36/52 
  Verifying  : libproxy-0.4.11-11.el7.x86_64                             37/52 
  Verifying  : mock-core-configs-30.4-1.el7.noarch                       38/52 
  Verifying  : distribution-gpg-keys-1.32-1.el7.noarch                   39/52 
  Verifying  : glibc-devel-2.17-260.el7_6.6.x86_64                       40/52 
  Verifying  : bzip2-1.0.6-13.el7.x86_64                                 41/52 
  Verifying  : subversion-1.7.14-14.el7.x86_64                           42/52 
  Verifying  : python36-distro-1.2.0-3.el7.noarch                        43/52 
  Verifying  : dwz-0.11-3.el7.x86_64                                     44/52 
  Verifying  : unzip-6.0-19.el7.x86_64                                   45/52 
  Verifying  : python36-markupsafe-0.23-3.el7.x86_64                     46/52 
  Verifying  : cpp-4.8.5-36.el7_6.2.x86_64                               47/52 
  Verifying  : python36-requests-2.12.5-3.el7.noarch                     48/52 
  Verifying  : python36-jinja2-2.8.1-2.el7.noarch                        49/52 
  Verifying  : python36-libs-3.6.8-1.el7.x86_64                          50/52 
  Verifying  : elfutils-0.172-2.el7.x86_64                               51/52 
  Verifying  : python36-rpm-4.11.3-4.el7.x86_64                          52/52 

Installed:
  golang.x86_64 0:1.11.5-1.el7            mock.noarch 0:1.4.16-1.el7
  rpm-build.x86_64 0:4.11.3-35.el7

Dependency Installed:
  apr.x86_64 0:1.4.8-3.el7_4.1
  apr-util.x86_64 0:1.5.2-6.el7
  bzip2.x86_64 0:1.0.6-13.el7
  cpp.x86_64 0:4.8.5-36.el7_6.2
  distribution-gpg-keys.noarch 0:1.32-1.el7
  dwz.x86_64 0:0.11-3.el7
  elfutils.x86_64 0:0.172-2.el7
  gcc.x86_64 0:4.8.5-36.el7_6.2
  gdb.x86_64 0:7.6.1-114.el7
  glibc-devel.x86_64 0:2.17-260.el7_6.6
  glibc-headers.x86_64 0:2.17-260.el7_6.6
  gnutls.x86_64 0:3.3.29-9.el7_6
  golang-bin.x86_64 0:1.11.5-1.el7
  golang-src.noarch 0:1.11.5-1.el7
  kernel-headers.x86_64 0:3.10.0-957.27.2.el7
  libmodman.x86_64 0:2.0.1-8.el7
  libmpc.x86_64 0:1.0.1-3.el7
  libproxy.x86_64 0:0.4.11-11.el7
  mercurial.x86_64 0:2.6.2-8.el7_4
  mock-core-configs.noarch 0:30.4-1.el7
  mpfr.x86_64 0:3.1.1-4.el7
  neon.x86_64 0:0.30.0-3.el7
  nettle.x86_64 0:2.7.1-8.el7
  pakchois.x86_64 0:0.4-10.el7
  patch.x86_64 0:2.7.1-10.el7_5
  perl-Thread-Queue.noarch 0:3.02-2.el7
  perl-srpm-macros.noarch 0:1-8.el7
  pigz.x86_64 0:2.3.4-1.el7
  python36.x86_64 0:3.6.8-1.el7
  python36-chardet.noarch 0:2.3.0-6.el7
  python36-distro.noarch 0:1.2.0-3.el7
  python36-idna.noarch 0:2.7-2.el7
  python36-jinja2.noarch 0:2.8.1-2.el7
  python36-libs.x86_64 0:3.6.8-1.el7
  python36-markupsafe.x86_64 0:0.23-3.el7
  python36-pyroute2.noarch 0:0.4.13-2.el7
  python36-pysocks.noarch 0:1.6.8-6.el7
  python36-requests.noarch 0:2.12.5-3.el7
  python36-rpm.x86_64 0:4.11.3-4.el7
  python36-setuptools.noarch 0:39.2.0-3.el7
  python36-six.noarch 0:1.11.0-3.el7
  python36-urllib3.noarch 0:1.19.1-5.el7
  redhat-rpm-config.noarch 0:9.1.0-87.el7.centos
  subversion.x86_64 0:1.7.14-14.el7
  subversion-libs.x86_64 0:1.7.14-14.el7
  trousers.x86_64 0:0.3.14-2.el7
  unzip.x86_64 0:6.0-19.el7
  usermode.x86_64 0:1.111-5.el7
  zip.x86_64 0:3.0-11.el7

Complete!
LINUX
Installing dep.
Version: v0.5.0
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 8513k  100 8513k    0     0  16.8M      0 --:--:-- --:--:-- --:--:-- 76.9M
Installing gometalinter.
Version: 2.0.5
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 38.3M  100 38.3M    0     0  43.2M      0 --:--:-- --:--:-- --:--:-- 86.7M
Installing etcd.
Version: v3.3.9
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 10.7M  100 10.7M    0     0  14.4M      0 --:--:-- --:--:-- --:--:-- 54.7M
~/nightlyrpmcTaSpx/go/src/github.com/gluster/glusterd2 ~
Installing vendored packages
Creating dist archive /root/nightlyrpmcTaSpx/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz
Created dist archive /root/nightlyrpmcTaSpx/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz
~ ~/nightlyrpmcTaSpx ~
INFO: mock.py version 1.4.16 starting (python version = 3.6.8)...
Start: init plugins
INFO: selinux disabled
Finish: init plugins
Start: run
INFO: Start(/root/nightlyrpmcTaSpx/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm)  Config(epel-7-x86_64)
Start: clean chroot
Finish: clean chroot
Start: chroot init
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled yum cache
Start: cleaning yum metadata
Finish: cleaning yum metadata
INFO: enabled HW Info plugin
Mock Version: 1.4.16
INFO: Mock Version: 1.4.16
Start: yum install
Finish: yum install
Start: creating root cache
Finish: creating root cache
Finish: chroot init
INFO: Installed packages:
Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: Outputting list of installed packages
Finish: Outputting list of installed packages
ERROR: Exception(/root/nightlyrpmcTaSpx/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 25 seconds
INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64
INFO: Cleaning up build root ('cleanup_on_failure=True')
Start: clean chroot
Finish: clean chroot
ERROR: Command failed: 
 # /usr/bin/systemd-nspawn -q -M 81147de29901417385416e89b79e2ae8 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.d2_mt_0n:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for :Building remotely : True
Logical operation result is TRUE
Running script  : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by

SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins5395309176165303815.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done 790c6e6f
+---------+-----------+------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname  | ip_address | chassis | used_count | current_state | comment  | distro | rel  | centos_version | architecture | node_pool | console_port | flavor |
+---------+-----------+------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 136     | n9.crusty | 172.19.2.9 | crusty  | 3870       | Deployed      | 790c6e6f | None   | None | 7              | x86_64       | 1         | 2080         | None   |
+---------+-----------+------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0

From ci at centos.org  Wed Aug  7 00:40:52 2019
From: ci at centos.org (ci at centos.org)
Date: Wed, 7 Aug 2019 00:40:52 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #252
In-Reply-To: <1593021371.2435.1565052063656.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1593021371.2435.1565052063656.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1549022476.2566.1565138452269.JavaMail.jenkins@jenkins.ci.centos.org>

See 
------------------------------------------
[...truncated 288.99 KB...]
TASK [container-engine/docker : check number of search domains] ****************
Wednesday 07 August 2019  01:40:08 +0100 (0:00:00.300)       0:03:00.018 ******
TASK [container-engine/docker : check length of search domains] ****************
Wednesday 07 August 2019  01:40:09 +0100 (0:00:00.301)       0:03:00.320 ******
TASK [container-engine/docker : check for minimum kernel version] **************
Wednesday 07 August 2019  01:40:09 +0100 (0:00:00.313)       0:03:00.633 ******
TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] ***
Wednesday 07 August 2019  01:40:09 +0100 (0:00:00.289)       0:03:00.922 ******
TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] ***
Wednesday 07 August 2019  01:40:10 +0100 (0:00:00.657)       0:03:01.580 ******
TASK [container-engine/docker : ensure docker-ce repository public key is installed] ***
Wednesday 07 August 2019  01:40:11 +0100 (0:00:01.326)       0:03:02.906 ******
TASK [container-engine/docker : ensure docker-ce repository is enabled] ********
Wednesday 07 August 2019  01:40:12 +0100 (0:00:00.258)       0:03:03.165 ******
TASK [container-engine/docker : ensure docker-engine repository public key is installed] ***
Wednesday 07 August 2019  01:40:12 +0100 (0:00:00.267)       0:03:03.432 ******
TASK [container-engine/docker : ensure docker-engine repository is enabled] ****
Wednesday 07 August 2019  01:40:12 +0100 (0:00:00.307)       0:03:03.740 ******
TASK [container-engine/docker : Configure docker repository on Fedora] *********
Wednesday 07 August 2019  01:40:13 +0100 (0:00:00.308)       0:03:04.049 ******
TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] ***
Wednesday 07 August 2019  01:40:13 +0100 (0:00:00.287)       0:03:04.337 ******
TASK [container-engine/docker : Copy yum.conf for editing] *********************
Wednesday 07 August 2019  01:40:13 +0100 (0:00:00.275)       0:03:04.613 ******
TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ******
Wednesday 07 August 2019  01:40:13 +0100 (0:00:00.283)       0:03:04.896 ******
TASK [container-engine/docker : ensure docker packages are installed] **********
Wednesday 07 August 2019  01:40:14 +0100 (0:00:00.411)       0:03:05.308 ******
TASK [container-engine/docker : Ensure docker packages are installed] **********
Wednesday 07 August 2019  01:40:14 +0100 (0:00:00.372)       0:03:05.681 ******
TASK [container-engine/docker : get available packages on Ubuntu] **************
Wednesday 07 August 2019  01:40:14 +0100 (0:00:00.327)       0:03:06.009 ******
TASK [container-engine/docker : show available packages on ubuntu] *************
Wednesday 07 August 2019  01:40:15 +0100 (0:00:00.277)       0:03:06.286 ******
TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] ***
Wednesday 07 August 2019  01:40:15 +0100 (0:00:00.295)       0:03:06.582 ******
TASK [container-engine/docker : ensure service is started if docker packages are already present] ***
Wednesday 07 August 2019  01:40:15 +0100 (0:00:00.292)       0:03:06.874 ******
ok: [kube1]
ok: [kube2]
ok: [kube3]
 [WARNING]: flush_handlers task does not support when conditional
TASK [container-engine/docker : set fact for docker_version] *******************
Wednesday 07 August 2019  01:40:17 +0100 (0:00:01.922)       0:03:08.796 ******
ok: [kube1]
ok: [kube2]
ok: [kube3]
TASK [container-engine/docker : check minimum docker version for docker_dns mode. You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] ***
Wednesday 07 August 2019  01:40:18 +0100 (0:00:01.128)       0:03:09.925 ******
TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] ***
Wednesday 07 August 2019  01:40:19 +0100 (0:00:00.321)       0:03:10.247 ******
changed: [kube1]
changed: [kube2]
changed: [kube3]
TASK [container-engine/docker : Write docker proxy drop-in] ********************
Wednesday 07 August 2019  01:40:20 +0100 (0:00:01.015)       0:03:11.263 ******
TASK [container-engine/docker : get systemd version] ***************************
Wednesday 07 August 2019  01:40:20 +0100 (0:00:00.351)       0:03:11.614 ******
TASK [container-engine/docker : Write docker.service systemd file] *************
Wednesday 07 August 2019  01:40:20 +0100 (0:00:00.333)       0:03:11.948 ******
TASK [container-engine/docker : Write docker options systemd drop-in] **********
Wednesday 07 August 2019  01:40:21 +0100 (0:00:00.480)       0:03:12.428 ******
changed: [kube2]
changed: [kube1]
changed: [kube3]
TASK [container-engine/docker : Write docker dns systemd drop-in] **************
Wednesday 07 August 2019  01:40:23 +0100 (0:00:02.086)       0:03:14.515 ******
changed: [kube1]
changed: [kube2]
changed: [kube3]
TASK [container-engine/docker : Copy docker orphan clean up script to the node] ***
Wednesday 07 August 2019  01:40:25 +0100 (0:00:02.064)       0:03:16.579 ******
TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] ***
Wednesday 07 August 2019  01:40:25 +0100 (0:00:00.339)       0:03:16.919 ******
RUNNING HANDLER [container-engine/docker : restart docker] *********************
Wednesday 07 August 2019  01:40:26 +0100 (0:00:00.237)       0:03:17.157 ******
changed: [kube2]
changed: [kube3]
changed: [kube1]
RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************
Wednesday 07 August 2019  01:40:28 +0100 (0:00:02.026)       0:03:19.183 ******
changed: [kube3]
changed: [kube2]
changed: [kube1]
RUNNING HANDLER [container-engine/docker : Docker | reload docker.socket] ******
Wednesday 07 August 2019  01:40:29 +0100 (0:00:01.086)       0:03:20.270 ******
RUNNING HANDLER [container-engine/docker : Docker | reload docker] *************
Wednesday 07 August 2019  01:40:29 +0100 (0:00:00.275)       0:03:20.546 ******
changed: [kube2]
changed: [kube1]
changed: [kube3]
RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] ***
Wednesday 07 August 2019  01:40:33 +0100 (0:00:04.017)       0:03:24.563 ******
Pausing for 10 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
[container-engine/docker : Docker | pause while Docker restarts]
Waiting for docker restart:
ok: [kube2]
RUNNING HANDLER [container-engine/docker : Docker | wait for docker] ***********
Wednesday 07 August 2019  01:40:43 +0100 (0:00:10.192)       0:03:34.756 ******
changed: [kube2]
changed: [kube3]
changed: [kube1]
TASK [container-engine/docker : ensure docker service is started and enabled] ***
Wednesday 07 August 2019  01:40:45 +0100 (0:00:01.312)       0:03:36.069 ******
ok: [kube1] => (item=docker)
ok: [kube3] => (item=docker)
ok: [kube2] => (item=docker)
TASK [download : include_tasks] ************************************************
Wednesday 07 August 2019  01:40:46 +0100 (0:00:01.240)       0:03:37.310 ******
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3
TASK [download : Register docker images info] **********************************
Wednesday 07 August 2019  01:40:46 +0100 (0:00:00.536)       0:03:37.846 ******
ok: [kube1]
ok: [kube3]
ok: [kube2]
TASK [download : container_download | Create dest directory for saved/loaded container images] ***
Wednesday 07 August 2019  01:40:47 +0100 (0:00:01.085)       0:03:38.932 ******
changed: [kube2]
changed: [kube1]
changed: [kube3]
TASK [download : container_download | create local directory for saved/loaded container images] ***
Wednesday 07 August 2019  01:40:49 +0100 (0:00:01.119)       0:03:40.051 ******
TASK [download : 
Download items] ***********************************************
Wednesday 07 August 2019  01:40:49 +0100 (0:00:00.168)       0:03:40.220 ******
fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n  ^ here\n"}
fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n  ^ here\n"}
fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n  ^ here\n"}
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3

PLAY RECAP *********************************************************************
kube1                      : ok=108  changed=22   unreachable=0    failed=10   skipped=116  rescued=0    ignored=0
kube2                      : ok=97   changed=22   unreachable=0    failed=10   skipped=111  rescued=0    ignored=0
kube3                      : ok=94   changed=22   unreachable=0    failed=10   skipped=113  rescued=0    ignored=0

Wednesday 07 August 2019  01:40:51 +0100 (0:00:02.691)       0:03:42.912 ******
===============================================================================
Install packages ------------------------------------------------------- 35.44s
Wait for host to be available ------------------------------------------ 21.69s
gather facts from all instances ---------------------------------------- 16.42s
container-engine/docker : Docker | pause while Docker restarts --------- 10.19s
Persist loaded modules -------------------------------------------------- 6.13s
container-engine/docker : Docker | reload docker ------------------------ 4.02s
kubernetes/preinstall : Create kubernetes directories ------------------- 3.88s
bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.70s
download 
: Download items ----------------------------------------------- 2.69s
Load required kernel modules -------------------------------------------- 2.67s
bootstrap-os : Gather nodes hostnames ----------------------------------- 2.56s
kubernetes/preinstall : Create cni directories -------------------------- 2.47s
kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.44s
Extend root VG ---------------------------------------------------------- 2.40s
Gathering Facts --------------------------------------------------------- 2.14s
download : Sync container ----------------------------------------------- 2.11s
kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.11s
container-engine/docker : Write docker options systemd drop-in ---------- 2.09s
container-engine/docker : Write docker dns systemd drop-in -------------- 2.06s
container-engine/docker : restart docker -------------------------------- 2.03s

==> kube3: An error occurred. The error will be shown after all tasks complete.

An error occurred while executing multiple actions in parallel.
Any errors that occurred are shown below.

An error occurred while executing the action on the 'kube3'
machine. Please handle this error then try again:

Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.

Build step 'Execute shell' marked build as failure
Performing Post build task...
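[Editor's note: the repeated failure above, "'delegate_to' is not a valid attribute for a TaskInclude", arises when `delegate_to` is placed directly on a dynamic `include_tasks` entry, which the Ansible release used here rejects. A minimal sketch of the failing pattern and one commonly used rearrangement follows; the task bodies are illustrative, not taken from the kubespray source:]

```yaml
# Failing pattern (sketch): delegate_to attached to the include itself.
# TaskInclude does not accept this keyword, so the play aborts at load time.
- name: container_download | include download tasks
  include_tasks: download_container.yml
  delegate_to: localhost        # <- rejected on a TaskInclude

# Workaround (sketch): keep the include plain and move delegate_to
# onto the tasks inside the included file instead.
- name: container_download | include download tasks
  include_tasks: download_container.yml

# inside download_container.yml (illustrative task):
- name: container_download | pull image
  command: docker pull "{{ image }}"
  delegate_to: localhost
```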
Could not match :Build started  : False
Logical operation result is FALSE
Skipping script  : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by

SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

END OF POST BUILD TASK : 0

From ci at centos.org  Wed Aug  7 01:24:39 2019
From: ci at centos.org (ci at centos.org)
Date: Wed, 7 Aug 2019 01:24:39 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #277
In-Reply-To: <171350722.2442.1565054640098.JavaMail.jenkins@jenkins.ci.centos.org>
References: <171350722.2442.1565054640098.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <154189532.2581.1565141079963.JavaMail.jenkins@jenkins.ci.centos.org>

See 
------------------------------------------
[...truncated 57.61 KB...]
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) creation to complete] *******************************
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=6    changed=4    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0

--> Scenario: 'default'
--> Action: 'prepare'

PLAY [Prepare] *****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

PLAY RECAP *********************************************************************
instance                   : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: author
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: description
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: company
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: license
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
An error occurred during the test sequence action: 'lint'. Cleaning up.
--> Scenario: 'default'
--> Action: 'destroy'
PLAY [Destroy] *****************************************************************
TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Delete docker network(s)] ************************************************
PLAY RECAP *********************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── create
    └── prepare

--> Scenario: 'default'
--> Action: 'create'
[DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
PLAY [Create] ******************************************************************
TASK [Log into a Docker registry] **********************************************
skipping: [localhost] => (item=None)
TASK [Create Dockerfiles from image names] *************************************
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Discover local Docker images] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]
TASK [Build an Ansible compatible image] ***************************************
ok: [localhost] => (item=None)
ok: [localhost]
TASK [Create docker network(s)] ************************************************
TASK [Determine the CMD directives] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]
TASK [Create molecule instance(s)] *********************************************
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Wait for instance(s) creation to complete] *******************************
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
--> Scenario: 'default'
--> Action: 'prepare'
PLAY [Prepare] *****************************************************************
TASK [Gathering Facts] *********************************************************
ok: [instance]
TASK [Install Dependency Packages] *********************************************
changed: [instance]
TASK [Install Dependency Packages] *********************************************
changed: [instance]
PLAY RECAP *********************************************************************
instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: author
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: description
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: company
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: license
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
An error occurred during the test sequence action: 'lint'. Cleaning up.
--> Scenario: 'default'
--> Action: 'destroy'
PLAY [Destroy] *****************************************************************
TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Delete docker network(s)] ************************************************
PLAY RECAP *********************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
Build step 'Execute shell' marked build as failure
Performing Post build task...
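[Editor's note] The two ansible-lint findings repeated in the logs above ([701] and [703]) both point at unedited Galaxy boilerplate in roles/firewall_config/meta/main.yml. A minimal sketch of a galaxy_info block that would satisfy both rules is below; the author, description, company, and license values are illustrative placeholders, not the project's actual metadata:

```yaml
# Hypothetical meta/main.yml for roles/firewall_config.
# All values below are placeholders; substitute the project's real metadata.
galaxy_info:
  author: A. Maintainer                  # [703] replaces default "your name"
  description: Configure firewalld for GlusterFS   # [703] replaces "your description"
  company: Example Org                   # [703] replaces "your company (optional)"
  license: GPLv2                         # [703] replaces "license (GPLv2, CC-BY, etc)"
  min_ansible_version: 1.2
  platforms:                             # [701] role info should contain platforms
    - name: EL
      versions:
        - 7
  galaxy_tags: []

dependencies: []
```

With non-default metadata and a platforms list in place, rule 701 and all four 703 findings above should clear, letting the molecule 'lint' action proceed.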
Could not match :Build started : False
Logical operation result is FALSE
Skipping script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by

SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0
From ci at centos.org Thu Aug 8 00:16:41 2019
From: ci at centos.org (ci at centos.org)
Date: Thu, 8 Aug 2019 00:16:41 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #449
In-Reply-To: <1272528612.2564.1565137009296.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1272528612.2564.1565137009296.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <246564148.2664.1565223401385.JavaMail.jenkins@jenkins.ci.centos.org>
See ------------------------------------------
[...truncated 39.02 KB...]
Transaction test succeeded
Running transaction
Installing : python36-libs-3.6.8-1.el7.x86_64 1/52 Installing : python36-3.6.8-1.el7.x86_64 2/52 Installing : apr-1.4.8-3.el7_4.1.x86_64 3/52 Installing : mpfr-3.1.1-4.el7.x86_64 4/52 Installing : libmpc-1.0.1-3.el7.x86_64 5/52 Installing : apr-util-1.5.2-6.el7.x86_64 6/52 Installing : python36-six-1.11.0-3.el7.noarch 7/52 Installing : cpp-4.8.5-36.el7_6.2.x86_64 8/52 Installing : python36-idna-2.7-2.el7.noarch 9/52 Installing : python36-pysocks-1.6.8-6.el7.noarch 10/52 Installing : python36-urllib3-1.19.1-5.el7.noarch 11/52 Installing : python36-pyroute2-0.4.13-2.el7.noarch 12/52 Installing : python36-setuptools-39.2.0-3.el7.noarch 13/52 Installing : python36-chardet-2.3.0-6.el7.noarch 14/52 Installing : python36-requests-2.12.5-3.el7.noarch 15/52 Installing : python36-distro-1.2.0-3.el7.noarch 16/52 Installing : python36-markupsafe-0.23-3.el7.x86_64 17/52 Installing : python36-jinja2-2.8.1-2.el7.noarch 18/52 Installing : python36-rpm-4.11.3-4.el7.x86_64 19/52 Installing : elfutils-0.172-2.el7.x86_64 20/52 Installing : unzip-6.0-19.el7.x86_64 21/52 Installing : dwz-0.11-3.el7.x86_64
22/52 Installing : bzip2-1.0.6-13.el7.x86_64 23/52 Installing : distribution-gpg-keys-1.32-1.el7.noarch 24/52 Installing : mock-core-configs-30.4-1.el7.noarch 25/52 Installing : usermode-1.111-5.el7.x86_64 26/52 Installing : pakchois-0.4-10.el7.x86_64 27/52 Installing : patch-2.7.1-10.el7_5.x86_64 28/52 Installing : libmodman-2.0.1-8.el7.x86_64 29/52 Installing : libproxy-0.4.11-11.el7.x86_64 30/52 Installing : gdb-7.6.1-114.el7.x86_64 31/52 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/52 Installing : perl-srpm-macros-1-8.el7.noarch 33/52 Installing : pigz-2.3.4-1.el7.x86_64 34/52 Installing : golang-src-1.11.5-1.el7.noarch 35/52 Installing : nettle-2.7.1-8.el7.x86_64 36/52 Installing : kernel-headers-3.10.0-957.27.2.el7.x86_64 37/52 Installing : glibc-headers-2.17-260.el7_6.6.x86_64 38/52 Installing : glibc-devel-2.17-260.el7_6.6.x86_64 39/52 Installing : gcc-4.8.5-36.el7_6.2.x86_64 40/52 Installing : zip-3.0-11.el7.x86_64 41/52 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 42/52 Installing : mercurial-2.6.2-8.el7_4.x86_64 43/52 Installing : trousers-0.3.14-2.el7.x86_64 44/52 Installing : gnutls-3.3.29-9.el7_6.x86_64 45/52 Installing : neon-0.30.0-3.el7.x86_64 46/52 Installing : subversion-libs-1.7.14-14.el7.x86_64 47/52 Installing : subversion-1.7.14-14.el7.x86_64 48/52 Installing : golang-1.11.5-1.el7.x86_64 49/52 Installing : golang-bin-1.11.5-1.el7.x86_64 50/52 Installing : rpm-build-4.11.3-35.el7.x86_64 51/52 Installing : mock-1.4.16-1.el7.noarch 52/52 Verifying : trousers-0.3.14-2.el7.x86_64 1/52 Verifying : python36-idna-2.7-2.el7.noarch 2/52 Verifying : rpm-build-4.11.3-35.el7.x86_64 3/52 Verifying : glibc-headers-2.17-260.el7_6.6.x86_64 4/52 Verifying : python36-pysocks-1.6.8-6.el7.noarch 5/52 Verifying : mercurial-2.6.2-8.el7_4.x86_64 6/52 Verifying : zip-3.0-11.el7.x86_64 7/52 Verifying : python36-3.6.8-1.el7.x86_64 8/52 Verifying : subversion-libs-1.7.14-14.el7.x86_64 9/52 Verifying : python36-urllib3-1.19.1-5.el7.noarch 10/52 
Verifying : kernel-headers-3.10.0-957.27.2.el7.x86_64 11/52 Verifying : nettle-2.7.1-8.el7.x86_64 12/52 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 13/52 Verifying : golang-src-1.11.5-1.el7.noarch 14/52 Verifying : python36-pyroute2-0.4.13-2.el7.noarch 15/52 Verifying : pigz-2.3.4-1.el7.x86_64 16/52 Verifying : perl-srpm-macros-1-8.el7.noarch 17/52 Verifying : golang-1.11.5-1.el7.x86_64 18/52 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 19/52 Verifying : golang-bin-1.11.5-1.el7.x86_64 20/52 Verifying : gdb-7.6.1-114.el7.x86_64 21/52 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 22/52 Verifying : gnutls-3.3.29-9.el7_6.x86_64 23/52 Verifying : mock-1.4.16-1.el7.noarch 24/52 Verifying : libmodman-2.0.1-8.el7.x86_64 25/52 Verifying : python36-setuptools-39.2.0-3.el7.noarch 26/52 Verifying : mpfr-3.1.1-4.el7.x86_64 27/52 Verifying : python36-six-1.11.0-3.el7.noarch 28/52 Verifying : apr-util-1.5.2-6.el7.x86_64 29/52 Verifying : python36-chardet-2.3.0-6.el7.noarch 30/52 Verifying : patch-2.7.1-10.el7_5.x86_64 31/52 Verifying : libmpc-1.0.1-3.el7.x86_64 32/52 Verifying : pakchois-0.4-10.el7.x86_64 33/52 Verifying : neon-0.30.0-3.el7.x86_64 34/52 Verifying : usermode-1.111-5.el7.x86_64 35/52 Verifying : apr-1.4.8-3.el7_4.1.x86_64 36/52 Verifying : libproxy-0.4.11-11.el7.x86_64 37/52 Verifying : mock-core-configs-30.4-1.el7.noarch 38/52 Verifying : distribution-gpg-keys-1.32-1.el7.noarch 39/52 Verifying : glibc-devel-2.17-260.el7_6.6.x86_64 40/52 Verifying : bzip2-1.0.6-13.el7.x86_64 41/52 Verifying : subversion-1.7.14-14.el7.x86_64 42/52 Verifying : python36-distro-1.2.0-3.el7.noarch 43/52 Verifying : dwz-0.11-3.el7.x86_64 44/52 Verifying : unzip-6.0-19.el7.x86_64 45/52 Verifying : python36-markupsafe-0.23-3.el7.x86_64 46/52 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 47/52 Verifying : python36-requests-2.12.5-3.el7.noarch 48/52 Verifying : python36-jinja2-2.8.1-2.el7.noarch 49/52 Verifying : python36-libs-3.6.8-1.el7.x86_64 50/52 Verifying : 
elfutils-0.172-2.el7.x86_64 51/52 Verifying : python36-rpm-4.11.3-4.el7.x86_64 52/52 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.16-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.32-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.6 glibc-headers.x86_64 0:2.17-260.el7_6.6 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.27.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.4-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python36.x86_64 0:3.6.8-1.el7 python36-chardet.noarch 0:2.3.0-6.el7 python36-distro.noarch 0:1.2.0-3.el7 python36-idna.noarch 0:2.7-2.el7 python36-jinja2.noarch 0:2.8.1-2.el7 python36-libs.x86_64 0:3.6.8-1.el7 python36-markupsafe.x86_64 0:0.23-3.el7 python36-pyroute2.noarch 0:0.4.13-2.el7 python36-pysocks.noarch 0:1.6.8-6.el7 python36-requests.noarch 0:2.12.5-3.el7 python36-rpm.x86_64 0:4.11.3-4.el7 python36-setuptools.noarch 0:39.2.0-3.el7 python36-six.noarch 0:1.11.0-3.el7 python36-urllib3.noarch 0:1.19.1-5.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1971 0 --:--:-- --:--:-- --:--:-- 1977 100 8513k 100 8513k 0 0 10.0M 0 --:--:-- --:--:-- --:--:-- 10.0M100 8513k 100 8513k 0 0 10.0M 0 --:--:-- --:--:-- --:--:-- 0 Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 1786 0 --:--:-- --:--:-- --:--:-- 1791 78 38.3M 78 29.9M 0 0 34.9M 0 0:00:01 --:--:-- 0:00:01 34.9M100 38.3M 100 38.3M 0 0 36.4M 0 0:00:01 0:00:01 --:--:-- 42.8M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 567 0 --:--:-- --:--:-- --:--:-- 568 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 620 0 0 1618 0 --:--:-- --:--:-- --:--:-- 605k 100 10.7M 100 10.7M 0 0 12.4M 0 --:--:-- --:--:-- --:--:-- 12.4M ~/nightlyrpmJubAsg/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmJubAsg/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmJubAsg/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmJubAsg ~ INFO: mock.py version 1.4.16 starting (python version = 3.6.8)... 
Start: init plugins
INFO: selinux disabled
Finish: init plugins
Start: run
INFO: Start(/root/nightlyrpmJubAsg/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64)
Start: clean chroot
Finish: clean chroot
Start: chroot init
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled yum cache
Start: cleaning yum metadata
Finish: cleaning yum metadata
INFO: enabled HW Info plugin
Mock Version: 1.4.16
INFO: Mock Version: 1.4.16
Start: yum install
Finish: yum install
Start: creating root cache
Finish: creating root cache
Finish: chroot init
INFO: Installed packages:
Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: Outputting list of installed packages
Finish: Outputting list of installed packages
ERROR: Exception(/root/nightlyrpmJubAsg/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 23 seconds
INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64
INFO: Cleaning up build root ('cleanup_on_failure=True')
Start: clean chroot
Finish: clean chroot
ERROR: Command failed:
 # /usr/bin/systemd-nspawn -q -M 076a904e7dcf409d912fffa334a27d8b -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.y4744zxj:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for :Building remotely : True
Logical operation result is TRUE
Running script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by

SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins2823065491236159893.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done 40e05de2
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname   | ip_address  | chassis | used_count | current_state | comment  | distro | rel  | centos_version | architecture | node_pool | console_port | flavor |
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 141     | n14.crusty | 172.19.2.14 | crusty  | 3874       | Deployed      | 40e05de2 | None   | None | 7              | x86_64       | 1         | 2130         | None   |
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0
From ci at centos.org Thu Aug 8 00:42:27 2019
From: ci at centos.org (ci at centos.org)
Date: Thu, 8 Aug 2019 00:42:27 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #253
In-Reply-To: <1549022476.2566.1565138452269.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1549022476.2566.1565138452269.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <903730993.2665.1565224947240.JavaMail.jenkins@jenkins.ci.centos.org>
See ------------------------------------------
[...truncated 288.98 KB...]
TASK [container-engine/docker : check number of search domains] **************** Thursday 08 August 2019 01:41:43 +0100 (0:00:00.306) 0:03:01.624 ******* TASK [container-engine/docker : check length of search domains] **************** Thursday 08 August 2019 01:41:44 +0100 (0:00:00.298) 0:03:01.922 ******* TASK [container-engine/docker : check for minimum kernel version] ************** Thursday 08 August 2019 01:41:44 +0100 (0:00:00.298) 0:03:02.221 ******* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Thursday 08 August 2019 01:41:44 +0100 (0:00:00.296) 0:03:02.518 ******* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] *** Thursday 08 August 2019 01:41:45 +0100 (0:00:00.670) 0:03:03.189 ******* TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Thursday 08 August 2019 01:41:46 +0100 (0:00:01.299) 0:03:04.488 ******* TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Thursday 08 August 2019 01:41:46 +0100 (0:00:00.263) 0:03:04.752 ******* TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Thursday 08 August 2019 01:41:47 +0100 (0:00:00.265) 0:03:05.017 ******* TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Thursday 08 August 2019 01:41:47 +0100 (0:00:00.311) 0:03:05.328 ******* TASK [container-engine/docker : Configure docker repository on Fedora] ********* Thursday 08 August 2019 01:41:47 +0100 (0:00:00.308) 0:03:05.636 ******* TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Thursday 08 August 2019 01:41:48 +0100 (0:00:00.280) 0:03:05.917 ******* TASK [container-engine/docker : Copy yum.conf for editing] ********************* Thursday 08 August 2019 01:41:48 +0100 (0:00:00.276) 0:03:06.193 ******* TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Thursday 08 August 
2019 01:41:48 +0100 (0:00:00.349) 0:03:06.543 ******* TASK [container-engine/docker : ensure docker packages are installed] ********** Thursday 08 August 2019 01:41:48 +0100 (0:00:00.314) 0:03:06.858 ******* TASK [container-engine/docker : Ensure docker packages are installed] ********** Thursday 08 August 2019 01:41:49 +0100 (0:00:00.368) 0:03:07.226 ******* TASK [container-engine/docker : get available packages on Ubuntu] ************** Thursday 08 August 2019 01:41:49 +0100 (0:00:00.341) 0:03:07.567 ******* TASK [container-engine/docker : show available packages on ubuntu] ************* Thursday 08 August 2019 01:41:49 +0100 (0:00:00.336) 0:03:07.904 ******* TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Thursday 08 August 2019 01:41:50 +0100 (0:00:00.314) 0:03:08.218 ******* TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Thursday 08 August 2019 01:41:50 +0100 (0:00:00.294) 0:03:08.513 ******* ok: [kube2] ok: [kube3] ok: [kube1] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Thursday 08 August 2019 01:41:52 +0100 (0:00:01.948) 0:03:10.462 ******* ok: [kube1] ok: [kube3] ok: [kube2] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Thursday 08 August 2019 01:41:53 +0100 (0:00:01.108) 0:03:11.570 ******* TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Thursday 08 August 2019 01:41:53 +0100 (0:00:00.341) 0:03:11.912 ******* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Thursday 08 August 2019 01:41:55 +0100 (0:00:01.047) 0:03:12.959 ******* TASK [container-engine/docker : get systemd version] *************************** Thursday 08 August 2019 01:41:55 +0100 (0:00:00.353) 0:03:13.314 ******* TASK [container-engine/docker : Write docker.service systemd file] ************* Thursday 08 August 2019 01:41:55 +0100 (0:00:00.311) 0:03:13.625 ******* TASK [container-engine/docker : Write docker options systemd drop-in] ********** Thursday 08 August 2019 01:41:56 +0100 (0:00:00.351) 0:03:13.976 ******* changed: [kube2] changed: [kube1] changed: [kube3] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Thursday 08 August 2019 01:41:58 +0100 (0:00:02.271) 0:03:16.247 ******* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Thursday 08 August 2019 01:42:00 +0100 (0:00:02.136) 0:03:18.383 ******* TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Thursday 08 August 2019 01:42:00 +0100 (0:00:00.332) 0:03:18.716 ******* RUNNING HANDLER [container-engine/docker : restart docker] ********************* Thursday 08 August 2019 01:42:01 +0100 (0:00:00.237) 0:03:18.954 ******* changed: [kube2] changed: [kube3] changed: [kube1] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Thursday 08 August 2019 01:42:02 +0100 (0:00:01.880) 0:03:20.834 ******* changed: [kube2] changed: [kube3] changed: [kube1] RUNNING HANDLER 
[container-engine/docker : Docker | reload docker.socket] ****** Thursday 08 August 2019 01:42:04 +0100 (0:00:01.135) 0:03:21.970 ******* RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Thursday 08 August 2019 01:42:04 +0100 (0:00:00.362) 0:03:22.333 ******* changed: [kube2] changed: [kube3] changed: [kube1] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Thursday 08 August 2019 01:42:08 +0100 (0:00:04.132) 0:03:26.465 ******* Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube2] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Thursday 08 August 2019 01:42:18 +0100 (0:00:10.213) 0:03:36.679 ******* changed: [kube2] changed: [kube3] changed: [kube1] TASK [container-engine/docker : ensure docker service is started and enabled] *** Thursday 08 August 2019 01:42:19 +0100 (0:00:01.207) 0:03:37.887 ******* ok: [kube1] => (item=docker) ok: [kube2] => (item=docker) ok: [kube3] => (item=docker) TASK [download : include_tasks] ************************************************ Thursday 08 August 2019 01:42:21 +0100 (0:00:01.347) 0:03:39.235 ******* included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Thursday 08 August 2019 01:42:21 +0100 (0:00:00.529) 0:03:39.764 ******* ok: [kube1] ok: [kube3] ok: [kube2] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Thursday 08 August 2019 01:42:22 +0100 (0:00:01.080) 0:03:40.845 ******* changed: [kube1] changed: [kube2] changed: [kube3] TASK [download : container_download | create local directory for saved/loaded container images] *** Thursday 08 August 2019 01:42:23 +0100 (0:00:01.021) 0:03:41.866 ******* TASK [download : 
Download items] *********************************************** Thursday 08 August 2019 01:42:24 +0100 (0:00:00.120) 0:03:41.987 ******* fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} [...identical 'delegate_to' failure repeated for kube1, kube2 and kube3 on each remaining download item (10 failures per host, matching failed=10 in the PLAY RECAP below), interleaved with three occurrences of: included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 ...] PLAY RECAP ********************************************************************* kube1 : ok=108 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0 kube2 : ok=97 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0 kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0 Thursday 08 August 2019 01:42:26 +0100 (0:00:02.754) 0:03:44.741 ******* =============================================================================== Install packages ------------------------------------------------------- 35.32s Wait for host to be available ------------------------------------------ 21.49s gather facts from all instances ---------------------------------------- 17.61s container-engine/docker : Docker | pause while Docker restarts --------- 10.21s Persist loaded modules -------------------------------------------------- 6.13s container-engine/docker : Docker | reload docker ------------------------ 4.13s kubernetes/preinstall : Create kubernetes directories ------------------- 4.01s download : Download items ----------------------------------------------- 2.75s bootstrap-os : Assign
inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.70s bootstrap-os : Gather nodes hostnames ----------------------------------- 2.56s Load required kernel modules -------------------------------------------- 2.54s kubernetes/preinstall : Create cni directories -------------------------- 2.45s Extend root VG ---------------------------------------------------------- 2.41s kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.30s container-engine/docker : Write docker options systemd drop-in ---------- 2.27s download : Download items ----------------------------------------------- 2.19s Gathering Facts --------------------------------------------------------- 2.17s container-engine/docker : Write docker dns systemd drop-in -------------- 2.14s download : Sync container ----------------------------------------------- 2.14s kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.12s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
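Every failure above is the same Ansible error: `delegate_to` is not accepted on an `include_tasks` (a TaskInclude) itself, only on actual tasks. A minimal sketch of the pattern involved and one way to resolve it; the file names come from the log, but the `download_delegate` variable and the task body here are illustrative, not the real kubespray code:

```yaml
# Rejected by Ansible: task keywords such as delegate_to cannot be
# attached to a dynamic include itself.
# - include_tasks: download_container.yml
#   delegate_to: "{{ download_delegate }}"

# Accepted: keep the include bare...
- include_tasks: download_container.yml

# ...and delegate on the individual tasks inside download_container.yml:
- name: container_download | Make download decision if pull is required by tag or sha256
  command: /bin/true          # stand-in for the real pull/check logic
  delegate_to: "{{ download_delegate | default('localhost') }}"
```

An alternative with the same effect is switching to `import_tasks` where the delegation can be inherited, but either way the keyword has to end up on real tasks rather than the TaskInclude.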
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Thu Aug 8 01:23:58 2019 From: ci at centos.org (ci at centos.org) Date: Thu, 8 Aug 2019 01:23:58 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #278 In-Reply-To: <154189532.2581.1565141079963.JavaMail.jenkins@jenkins.ci.centos.org> References: <154189532.2581.1565141079963.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1375528561.2669.1565227438596.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 57.27 KB...] changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. 
Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... [701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], 
u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── create └── prepare --> Scenario: 'default' --> Action: 'create' [DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg. PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] 
******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', 
u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
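Both molecule runs above abort at the lint stage for the same reason: ansible-lint rules [701] and [703] fire because the role's meta/main.yml still carries the galaxy-init boilerplate ('your name', 'your description', 'your company (optional)', and no platforms list). A sketch of a meta/main.yml that would satisfy both rules; every field value here is illustrative, not the project's actual metadata:

```yaml
galaxy_info:
  author: Gluster Ansible maintainers            # replaces the 'your name' placeholder (rule 703)
  description: Configure firewalld for GlusterFS nodes   # replaces 'your description' (703)
  company: Red Hat                               # replaces 'your company (optional)' (703)
  license: GPLv3                                 # replaces 'license (GPLv2, CC-BY, etc)' (703)
  min_ansible_version: 2.5
  platforms:                                     # rule 701: role info should contain platforms
    - name: EL
      versions:
        - 7
  galaxy_tags: []
dependencies: []
```

With real values filled in, the lint action passes and the sequence can continue past 'lint' into converge and verify instead of cleaning up early.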
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Fri Aug 9 00:16:40 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 9 Aug 2019 00:16:40 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #450 In-Reply-To: <246564148.2664.1565223401385.JavaMail.jenkins@jenkins.ci.centos.org> References: <246564148.2664.1565223401385.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1639533717.2761.1565309800896.JavaMail.jenkins@jenkins.ci.centos.org> See Changes: [kshithij.ki] Adding task to setup-glusto.yml playbook to install crefi. ------------------------------------------ [...truncated 39.01 KB...] Transaction test succeeded Running transaction Installing : python36-libs-3.6.8-1.el7.x86_64 1/52 Installing : python36-3.6.8-1.el7.x86_64 2/52 Installing : apr-1.4.8-3.el7_4.1.x86_64 3/52 Installing : mpfr-3.1.1-4.el7.x86_64 4/52 Installing : libmpc-1.0.1-3.el7.x86_64 5/52 Installing : apr-util-1.5.2-6.el7.x86_64 6/52 Installing : python36-six-1.11.0-3.el7.noarch 7/52 Installing : cpp-4.8.5-36.el7_6.2.x86_64 8/52 Installing : python36-idna-2.7-2.el7.noarch 9/52 Installing : python36-pysocks-1.6.8-6.el7.noarch 10/52 Installing : python36-urllib3-1.19.1-5.el7.noarch 11/52 Installing : python36-pyroute2-0.4.13-2.el7.noarch 12/52 Installing : python36-setuptools-39.2.0-3.el7.noarch 13/52 Installing : python36-chardet-2.3.0-6.el7.noarch 14/52 Installing : python36-requests-2.12.5-3.el7.noarch 15/52 Installing : python36-distro-1.2.0-3.el7.noarch 16/52 Installing : python36-markupsafe-0.23-3.el7.x86_64 17/52 Installing : python36-jinja2-2.8.1-2.el7.noarch 18/52 Installing : python36-rpm-4.11.3-4.el7.x86_64 19/52 Installing : elfutils-0.172-2.el7.x86_64 
20/52 Installing : unzip-6.0-19.el7.x86_64 21/52 Installing : dwz-0.11-3.el7.x86_64 22/52 Installing : bzip2-1.0.6-13.el7.x86_64 23/52 Installing : distribution-gpg-keys-1.32-1.el7.noarch 24/52 Installing : mock-core-configs-30.4-1.el7.noarch 25/52 Installing : usermode-1.111-5.el7.x86_64 26/52 Installing : pakchois-0.4-10.el7.x86_64 27/52 Installing : patch-2.7.1-10.el7_5.x86_64 28/52 Installing : libmodman-2.0.1-8.el7.x86_64 29/52 Installing : libproxy-0.4.11-11.el7.x86_64 30/52 Installing : gdb-7.6.1-114.el7.x86_64 31/52 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/52 Installing : perl-srpm-macros-1-8.el7.noarch 33/52 Installing : pigz-2.3.4-1.el7.x86_64 34/52 Installing : golang-src-1.11.5-1.el7.noarch 35/52 Installing : nettle-2.7.1-8.el7.x86_64 36/52 Installing : kernel-headers-3.10.0-957.27.2.el7.x86_64 37/52 Installing : glibc-headers-2.17-260.el7_6.6.x86_64 38/52 Installing : glibc-devel-2.17-260.el7_6.6.x86_64 39/52 Installing : gcc-4.8.5-36.el7_6.2.x86_64 40/52 Installing : zip-3.0-11.el7.x86_64 41/52 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 42/52 Installing : mercurial-2.6.2-8.el7_4.x86_64 43/52 Installing : trousers-0.3.14-2.el7.x86_64 44/52 Installing : gnutls-3.3.29-9.el7_6.x86_64 45/52 Installing : neon-0.30.0-3.el7.x86_64 46/52 Installing : subversion-libs-1.7.14-14.el7.x86_64 47/52 Installing : subversion-1.7.14-14.el7.x86_64 48/52 Installing : golang-1.11.5-1.el7.x86_64 49/52 Installing : golang-bin-1.11.5-1.el7.x86_64 50/52 Installing : rpm-build-4.11.3-35.el7.x86_64 51/52 Installing : mock-1.4.16-1.el7.noarch 52/52 Verifying : trousers-0.3.14-2.el7.x86_64 1/52 Verifying : python36-idna-2.7-2.el7.noarch 2/52 Verifying : rpm-build-4.11.3-35.el7.x86_64 3/52 Verifying : glibc-headers-2.17-260.el7_6.6.x86_64 4/52 Verifying : python36-pysocks-1.6.8-6.el7.noarch 5/52 Verifying : mercurial-2.6.2-8.el7_4.x86_64 6/52 Verifying : zip-3.0-11.el7.x86_64 7/52 Verifying : python36-3.6.8-1.el7.x86_64 8/52 Verifying : 
subversion-libs-1.7.14-14.el7.x86_64 9/52 Verifying : python36-urllib3-1.19.1-5.el7.noarch 10/52 Verifying : kernel-headers-3.10.0-957.27.2.el7.x86_64 11/52 Verifying : nettle-2.7.1-8.el7.x86_64 12/52 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 13/52 Verifying : golang-src-1.11.5-1.el7.noarch 14/52 Verifying : python36-pyroute2-0.4.13-2.el7.noarch 15/52 Verifying : pigz-2.3.4-1.el7.x86_64 16/52 Verifying : perl-srpm-macros-1-8.el7.noarch 17/52 Verifying : golang-1.11.5-1.el7.x86_64 18/52 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 19/52 Verifying : golang-bin-1.11.5-1.el7.x86_64 20/52 Verifying : gdb-7.6.1-114.el7.x86_64 21/52 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 22/52 Verifying : gnutls-3.3.29-9.el7_6.x86_64 23/52 Verifying : mock-1.4.16-1.el7.noarch 24/52 Verifying : libmodman-2.0.1-8.el7.x86_64 25/52 Verifying : python36-setuptools-39.2.0-3.el7.noarch 26/52 Verifying : mpfr-3.1.1-4.el7.x86_64 27/52 Verifying : python36-six-1.11.0-3.el7.noarch 28/52 Verifying : apr-util-1.5.2-6.el7.x86_64 29/52 Verifying : python36-chardet-2.3.0-6.el7.noarch 30/52 Verifying : patch-2.7.1-10.el7_5.x86_64 31/52 Verifying : libmpc-1.0.1-3.el7.x86_64 32/52 Verifying : pakchois-0.4-10.el7.x86_64 33/52 Verifying : neon-0.30.0-3.el7.x86_64 34/52 Verifying : usermode-1.111-5.el7.x86_64 35/52 Verifying : apr-1.4.8-3.el7_4.1.x86_64 36/52 Verifying : libproxy-0.4.11-11.el7.x86_64 37/52 Verifying : mock-core-configs-30.4-1.el7.noarch 38/52 Verifying : distribution-gpg-keys-1.32-1.el7.noarch 39/52 Verifying : glibc-devel-2.17-260.el7_6.6.x86_64 40/52 Verifying : bzip2-1.0.6-13.el7.x86_64 41/52 Verifying : subversion-1.7.14-14.el7.x86_64 42/52 Verifying : python36-distro-1.2.0-3.el7.noarch 43/52 Verifying : dwz-0.11-3.el7.x86_64 44/52 Verifying : unzip-6.0-19.el7.x86_64 45/52 Verifying : python36-markupsafe-0.23-3.el7.x86_64 46/52 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 47/52 Verifying : python36-requests-2.12.5-3.el7.noarch 48/52 Verifying : 
python36-jinja2-2.8.1-2.el7.noarch 49/52 Verifying : python36-libs-3.6.8-1.el7.x86_64 50/52 Verifying : elfutils-0.172-2.el7.x86_64 51/52 Verifying : python36-rpm-4.11.3-4.el7.x86_64 52/52 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.16-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.32-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.6 glibc-headers.x86_64 0:2.17-260.el7_6.6 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.27.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.4-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python36.x86_64 0:3.6.8-1.el7 python36-chardet.noarch 0:2.3.0-6.el7 python36-distro.noarch 0:1.2.0-3.el7 python36-idna.noarch 0:2.7-2.el7 python36-jinja2.noarch 0:2.8.1-2.el7 python36-libs.x86_64 0:3.6.8-1.el7 python36-markupsafe.x86_64 0:0.23-3.el7 python36-pyroute2.noarch 0:0.4.13-2.el7 python36-pysocks.noarch 0:1.6.8-6.el7 python36-requests.noarch 0:2.12.5-3.el7 python36-rpm.x86_64 0:4.11.3-4.el7 python36-setuptools.noarch 0:39.2.0-3.el7 python36-six.noarch 0:1.11.0-3.el7 python36-urllib3.noarch 0:1.19.1-5.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0 [curl progress meter elided] Installing gometalinter. Version: 2.0.5 [curl progress meter elided] Installing etcd. Version: v3.3.9 [curl progress meter elided] ~/nightlyrpmERHcHi/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmERHcHi/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmERHcHi/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmERHcHi ~ INFO: mock.py version 1.4.16 starting (python version = 3.6.8)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmERHcHi/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.16 INFO: Mock Version: 1.4.16 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmERHcHi/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 25 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M fb3f8856c8fa4096b21bd5140bf973bf -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.rxavy1v3:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins7583508464854716369.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done 2ce3edcc
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname   | ip_address  | chassis | used_count | current_state | comment  | distro | rel  | centos_version | architecture | node_pool | console_port | flavor |
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 159     | n32.crusty | 172.19.2.32 | crusty  | 3879       | Deployed      | 2ce3edcc | None   | None | 7              | x86_64       | 1         | 2310         | None   |
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0
From ci at centos.org Fri Aug 9 00:40:58 2019
From: ci at centos.org (ci at centos.org)
Date: Fri, 9 Aug 2019 00:40:58 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #254
In-Reply-To: <903730993.2665.1565224947240.JavaMail.jenkins@jenkins.ci.centos.org>
References: <903730993.2665.1565224947240.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1676918871.2762.1565311259035.JavaMail.jenkins@jenkins.ci.centos.org>
See
Changes:
[kshithij.ki] Adding task to setup-glusto.yml playbook to install crefi.
------------------------------------------
[...truncated 289.02 KB...] 
TASK [container-engine/docker : check number of search domains] **************** Friday 09 August 2019 01:40:15 +0100 (0:00:00.297) 0:03:03.474 ********* TASK [container-engine/docker : check length of search domains] **************** Friday 09 August 2019 01:40:16 +0100 (0:00:00.300) 0:03:03.774 ********* TASK [container-engine/docker : check for minimum kernel version] ************** Friday 09 August 2019 01:40:16 +0100 (0:00:00.296) 0:03:04.071 ********* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Friday 09 August 2019 01:40:16 +0100 (0:00:00.296) 0:03:04.368 ********* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] *** Friday 09 August 2019 01:40:17 +0100 (0:00:00.564) 0:03:04.933 ********* TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Friday 09 August 2019 01:40:18 +0100 (0:00:01.286) 0:03:06.219 ********* TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Friday 09 August 2019 01:40:18 +0100 (0:00:00.261) 0:03:06.481 ********* TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Friday 09 August 2019 01:40:19 +0100 (0:00:00.264) 0:03:06.745 ********* TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Friday 09 August 2019 01:40:19 +0100 (0:00:00.310) 0:03:07.056 ********* TASK [container-engine/docker : Configure docker repository on Fedora] ********* Friday 09 August 2019 01:40:19 +0100 (0:00:00.308) 0:03:07.364 ********* TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Friday 09 August 2019 01:40:20 +0100 (0:00:00.274) 0:03:07.639 ********* TASK [container-engine/docker : Copy yum.conf for editing] ********************* Friday 09 August 2019 01:40:20 +0100 (0:00:00.298) 0:03:07.938 ********* TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Friday 09 August 2019 
01:40:20 +0100 (0:00:00.297) 0:03:08.235 ********* TASK [container-engine/docker : ensure docker packages are installed] ********** Friday 09 August 2019 01:40:20 +0100 (0:00:00.285) 0:03:08.521 ********* TASK [container-engine/docker : Ensure docker packages are installed] ********** Friday 09 August 2019 01:40:21 +0100 (0:00:00.359) 0:03:08.881 ********* TASK [container-engine/docker : get available packages on Ubuntu] ************** Friday 09 August 2019 01:40:21 +0100 (0:00:00.344) 0:03:09.225 ********* TASK [container-engine/docker : show available packages on ubuntu] ************* Friday 09 August 2019 01:40:21 +0100 (0:00:00.279) 0:03:09.505 ********* TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Friday 09 August 2019 01:40:22 +0100 (0:00:00.275) 0:03:09.780 ********* TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Friday 09 August 2019 01:40:22 +0100 (0:00:00.285) 0:03:10.066 ********* ok: [kube3] ok: [kube1] ok: [kube2] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Friday 09 August 2019 01:40:24 +0100 (0:00:02.136) 0:03:12.202 ********* ok: [kube1] ok: [kube2] ok: [kube3] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Friday 09 August 2019 01:40:25 +0100 (0:00:01.049) 0:03:13.251 ********* TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Friday 09 August 2019 01:40:25 +0100 (0:00:00.286) 0:03:13.538 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Friday 09 August 2019 01:40:27 +0100 (0:00:01.180) 0:03:14.718 ********* TASK [container-engine/docker : get systemd version] *************************** Friday 09 August 2019 01:40:27 +0100 (0:00:00.360) 0:03:15.079 ********* TASK [container-engine/docker : Write docker.service systemd file] ************* Friday 09 August 2019 01:40:27 +0100 (0:00:00.349) 0:03:15.428 ********* TASK [container-engine/docker : Write docker options systemd drop-in] ********** Friday 09 August 2019 01:40:28 +0100 (0:00:00.310) 0:03:15.739 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Friday 09 August 2019 01:40:30 +0100 (0:00:02.252) 0:03:17.992 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Friday 09 August 2019 01:40:32 +0100 (0:00:02.228) 0:03:20.220 ********* TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Friday 09 August 2019 01:40:32 +0100 (0:00:00.303) 0:03:20.524 ********* RUNNING HANDLER [container-engine/docker : restart docker] ********************* Friday 09 August 2019 01:40:33 +0100 (0:00:00.238) 0:03:20.762 ********* changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Friday 09 August 2019 01:40:35 +0100 (0:00:01.908) 0:03:22.671 ********* changed: [kube3] changed: [kube1] changed: [kube2] RUNNING HANDLER 
[container-engine/docker : Docker | reload docker.socket] ****** Friday 09 August 2019 01:40:36 +0100 (0:00:01.139) 0:03:23.810 ********* RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Friday 09 August 2019 01:40:36 +0100 (0:00:00.290) 0:03:24.101 ********* changed: [kube1] changed: [kube3] changed: [kube2] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Friday 09 August 2019 01:40:40 +0100 (0:00:03.990) 0:03:28.091 ********* Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube1] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Friday 09 August 2019 01:40:50 +0100 (0:00:10.210) 0:03:38.301 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : ensure docker service is started and enabled] *** Friday 09 August 2019 01:40:51 +0100 (0:00:01.207) 0:03:39.508 ********* ok: [kube1] => (item=docker) ok: [kube3] => (item=docker) ok: [kube2] => (item=docker) TASK [download : include_tasks] ************************************************ Friday 09 August 2019 01:40:53 +0100 (0:00:01.262) 0:03:40.771 ********* included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Friday 09 August 2019 01:40:53 +0100 (0:00:00.529) 0:03:41.301 ********* ok: [kube1] ok: [kube2] ok: [kube3] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Friday 09 August 2019 01:40:54 +0100 (0:00:01.139) 0:03:42.440 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [download : container_download | create local directory for saved/loaded container images] *** Friday 09 August 2019 01:40:55 +0100 (0:00:00.957) 0:03:43.397 ********* TASK [download : 
Download items] *********************************************** Friday 09 August 2019 01:40:55 +0100 (0:00:00.142) 0:03:43.540 ********* fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} [the same fatal error repeated verbatim for kube1, kube2 and kube3 on each remaining download task, interleaved with three occurrences of: included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3] PLAY RECAP ********************************************************************* kube1 : ok=109 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0 kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0 kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0 Friday 09 August 2019 01:40:58 +0100 (0:00:02.784) 0:03:46.324 ********* =============================================================================== Install packages ------------------------------------------------------- 38.13s Wait for host to be available ------------------------------------------ 21.50s gather facts from all instances ---------------------------------------- 16.85s container-engine/docker : Docker | pause while Docker restarts --------- 10.21s Persist loaded modules -------------------------------------------------- 6.16s kubernetes/preinstall : Create kubernetes directories ------------------- 4.09s container-engine/docker : Docker | reload docker ------------------------ 3.99s download : Download items ----------------------------------------------- 2.78s bootstrap-os : Assign 
inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.68s Load required kernel modules -------------------------------------------- 2.56s bootstrap-os : Gather nodes hostnames ----------------------------------- 2.52s Extend root VG ---------------------------------------------------------- 2.51s kubernetes/preinstall : Create cni directories -------------------------- 2.48s Gathering Facts --------------------------------------------------------- 2.26s container-engine/docker : Write docker options systemd drop-in ---------- 2.25s container-engine/docker : Write docker dns systemd drop-in -------------- 2.23s kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.18s container-engine/docker : ensure service is started if docker packages are already present --- 2.14s download : Sync container ----------------------------------------------- 2.12s kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.05s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
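The repeated failure above ("'delegate_to' is not a valid attribute for a TaskInclude") is Ansible rejecting the `delegate_to` keyword placed directly on a task include in kubespray's download role. A minimal sketch of the usual workaround, assuming Ansible >= 2.7 and using `download_delegate` purely as an illustrative variable name, hands the keyword to the included tasks via `apply:` instead:

```yaml
# Fails: task keywords such as delegate_to are not valid on the include itself
# - include_tasks: download_container.yml
#   delegate_to: "{{ download_delegate }}"

# Workaround sketch: apply the keyword to the tasks inside the included file
- include_tasks: download_container.yml
  apply:
    delegate_to: "{{ download_delegate }}"
```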
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0
From ci at centos.org Fri Aug 9 01:13:13 2019
From: ci at centos.org (ci at centos.org)
Date: Fri, 9 Aug 2019 01:13:13 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #279
In-Reply-To: <1375528561.2669.1565227438596.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1375528561.2669.1565227438596.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <967174068.2768.1565313193733.JavaMail.jenkins@jenkins.ci.centos.org>
See
Changes:
[kshithij.ki] Adding task to setup-glusto.yml playbook to install crefi.
------------------------------------------
[...truncated 57.39 KB...]
changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). 
changed: [localhost] => (item=None)
changed: [localhost]

PLAY RECAP *********************************************************************
localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0

--> Scenario: 'default'
--> Action: 'prepare'

PLAY [Prepare] *****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

PLAY RECAP *********************************************************************
instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', 
u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? create ??? 
prepare --> Scenario: 'default' --> Action: 'create' [DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg. PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). 
changed: [localhost] => (item=None)
changed: [localhost]

PLAY RECAP *********************************************************************
localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0

--> Scenario: 'default'
--> Action: 'prepare'

PLAY [Prepare] *****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

PLAY RECAP *********************************************************************
instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', 
u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Sat Aug 10 00:16:41 2019 From: ci at centos.org (ci at centos.org) Date: Sat, 10 Aug 2019 00:16:41 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #451 In-Reply-To: <1639533717.2761.1565309800896.JavaMail.jenkins@jenkins.ci.centos.org> References: <1639533717.2761.1565309800896.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1626853745.2869.1565396201396.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 39.01 KB...] Transaction test succeeded Running transaction Installing : python36-libs-3.6.8-1.el7.x86_64 1/52 Installing : python36-3.6.8-1.el7.x86_64 2/52 Installing : apr-1.4.8-3.el7_4.1.x86_64 3/52 Installing : mpfr-3.1.1-4.el7.x86_64 4/52 Installing : libmpc-1.0.1-3.el7.x86_64 5/52 Installing : apr-util-1.5.2-6.el7.x86_64 6/52 Installing : python36-six-1.11.0-3.el7.noarch 7/52 Installing : cpp-4.8.5-36.el7_6.2.x86_64 8/52 Installing : python36-idna-2.7-2.el7.noarch 9/52 Installing : python36-pysocks-1.6.8-6.el7.noarch 10/52 Installing : python36-urllib3-1.19.1-5.el7.noarch 11/52 Installing : python36-pyroute2-0.4.13-2.el7.noarch 12/52 Installing : python36-setuptools-39.2.0-3.el7.noarch 13/52 Installing : python36-chardet-2.3.0-6.el7.noarch 14/52 Installing : python36-requests-2.12.5-3.el7.noarch 15/52 Installing : python36-distro-1.2.0-3.el7.noarch 16/52 Installing : python36-markupsafe-0.23-3.el7.x86_64 17/52 Installing : python36-jinja2-2.8.1-2.el7.noarch 18/52 Installing : python36-rpm-4.11.3-4.el7.x86_64 19/52 Installing : elfutils-0.172-2.el7.x86_64 20/52 Installing : unzip-6.0-19.el7.x86_64 21/52 Installing : 
dwz-0.11-3.el7.x86_64 22/52 Installing : bzip2-1.0.6-13.el7.x86_64 23/52 Installing : distribution-gpg-keys-1.32-1.el7.noarch 24/52 Installing : mock-core-configs-30.4-1.el7.noarch 25/52 Installing : usermode-1.111-5.el7.x86_64 26/52 Installing : pakchois-0.4-10.el7.x86_64 27/52 Installing : patch-2.7.1-10.el7_5.x86_64 28/52 Installing : libmodman-2.0.1-8.el7.x86_64 29/52 Installing : libproxy-0.4.11-11.el7.x86_64 30/52 Installing : gdb-7.6.1-114.el7.x86_64 31/52 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/52 Installing : perl-srpm-macros-1-8.el7.noarch 33/52 Installing : pigz-2.3.4-1.el7.x86_64 34/52 Installing : golang-src-1.11.5-1.el7.noarch 35/52 Installing : nettle-2.7.1-8.el7.x86_64 36/52 Installing : kernel-headers-3.10.0-957.27.2.el7.x86_64 37/52 Installing : glibc-headers-2.17-260.el7_6.6.x86_64 38/52 Installing : glibc-devel-2.17-260.el7_6.6.x86_64 39/52 Installing : gcc-4.8.5-36.el7_6.2.x86_64 40/52 Installing : zip-3.0-11.el7.x86_64 41/52 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 42/52 Installing : mercurial-2.6.2-8.el7_4.x86_64 43/52 Installing : trousers-0.3.14-2.el7.x86_64 44/52 Installing : gnutls-3.3.29-9.el7_6.x86_64 45/52 Installing : neon-0.30.0-3.el7.x86_64 46/52 Installing : subversion-libs-1.7.14-14.el7.x86_64 47/52 Installing : subversion-1.7.14-14.el7.x86_64 48/52 Installing : golang-1.11.5-1.el7.x86_64 49/52 Installing : golang-bin-1.11.5-1.el7.x86_64 50/52 Installing : rpm-build-4.11.3-35.el7.x86_64 51/52 Installing : mock-1.4.16-1.el7.noarch 52/52 Verifying : trousers-0.3.14-2.el7.x86_64 1/52 Verifying : python36-idna-2.7-2.el7.noarch 2/52 Verifying : rpm-build-4.11.3-35.el7.x86_64 3/52 Verifying : glibc-headers-2.17-260.el7_6.6.x86_64 4/52 Verifying : python36-pysocks-1.6.8-6.el7.noarch 5/52 Verifying : mercurial-2.6.2-8.el7_4.x86_64 6/52 Verifying : zip-3.0-11.el7.x86_64 7/52 Verifying : python36-3.6.8-1.el7.x86_64 8/52 Verifying : subversion-libs-1.7.14-14.el7.x86_64 9/52 Verifying : 
python36-urllib3-1.19.1-5.el7.noarch 10/52 Verifying : kernel-headers-3.10.0-957.27.2.el7.x86_64 11/52 Verifying : nettle-2.7.1-8.el7.x86_64 12/52 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 13/52 Verifying : golang-src-1.11.5-1.el7.noarch 14/52 Verifying : python36-pyroute2-0.4.13-2.el7.noarch 15/52 Verifying : pigz-2.3.4-1.el7.x86_64 16/52 Verifying : perl-srpm-macros-1-8.el7.noarch 17/52 Verifying : golang-1.11.5-1.el7.x86_64 18/52 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 19/52 Verifying : golang-bin-1.11.5-1.el7.x86_64 20/52 Verifying : gdb-7.6.1-114.el7.x86_64 21/52 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 22/52 Verifying : gnutls-3.3.29-9.el7_6.x86_64 23/52 Verifying : mock-1.4.16-1.el7.noarch 24/52 Verifying : libmodman-2.0.1-8.el7.x86_64 25/52 Verifying : python36-setuptools-39.2.0-3.el7.noarch 26/52 Verifying : mpfr-3.1.1-4.el7.x86_64 27/52 Verifying : python36-six-1.11.0-3.el7.noarch 28/52 Verifying : apr-util-1.5.2-6.el7.x86_64 29/52 Verifying : python36-chardet-2.3.0-6.el7.noarch 30/52 Verifying : patch-2.7.1-10.el7_5.x86_64 31/52 Verifying : libmpc-1.0.1-3.el7.x86_64 32/52 Verifying : pakchois-0.4-10.el7.x86_64 33/52 Verifying : neon-0.30.0-3.el7.x86_64 34/52 Verifying : usermode-1.111-5.el7.x86_64 35/52 Verifying : apr-1.4.8-3.el7_4.1.x86_64 36/52 Verifying : libproxy-0.4.11-11.el7.x86_64 37/52 Verifying : mock-core-configs-30.4-1.el7.noarch 38/52 Verifying : distribution-gpg-keys-1.32-1.el7.noarch 39/52 Verifying : glibc-devel-2.17-260.el7_6.6.x86_64 40/52 Verifying : bzip2-1.0.6-13.el7.x86_64 41/52 Verifying : subversion-1.7.14-14.el7.x86_64 42/52 Verifying : python36-distro-1.2.0-3.el7.noarch 43/52 Verifying : dwz-0.11-3.el7.x86_64 44/52 Verifying : unzip-6.0-19.el7.x86_64 45/52 Verifying : python36-markupsafe-0.23-3.el7.x86_64 46/52 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 47/52 Verifying : python36-requests-2.12.5-3.el7.noarch 48/52 Verifying : python36-jinja2-2.8.1-2.el7.noarch 49/52 Verifying : python36-libs-3.6.8-1.el7.x86_64 
50/52 Verifying : elfutils-0.172-2.el7.x86_64 51/52 Verifying : python36-rpm-4.11.3-4.el7.x86_64 52/52 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.16-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.32-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.6 glibc-headers.x86_64 0:2.17-260.el7_6.6 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.27.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.4-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python36.x86_64 0:3.6.8-1.el7 python36-chardet.noarch 0:2.3.0-6.el7 python36-distro.noarch 0:1.2.0-3.el7 python36-idna.noarch 0:2.7-2.el7 python36-jinja2.noarch 0:2.8.1-2.el7 python36-libs.x86_64 0:3.6.8-1.el7 python36-markupsafe.x86_64 0:0.23-3.el7 python36-pyroute2.noarch 0:0.4.13-2.el7 python36-pysocks.noarch 0:1.6.8-6.el7 python36-requests.noarch 0:2.12.5-3.el7 python36-rpm.x86_64 0:4.11.3-4.el7 python36-setuptools.noarch 0:39.2.0-3.el7 python36-six.noarch 0:1.11.0-3.el7 python36-urllib3.noarch 0:1.19.1-5.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0
[curl download progress meter omitted]
Installing gometalinter.
Version: 2.0.5
[curl download progress meter omitted]
Installing etcd.
Version: v3.3.9
[curl download progress meter omitted]
~/nightlyrpm76wiMo/go/src/github.com/gluster/glusterd2 ~
Installing vendored packages
Creating dist archive /root/nightlyrpm76wiMo/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz
Created dist archive /root/nightlyrpm76wiMo/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz
~ ~/nightlyrpm76wiMo ~
INFO: mock.py version 1.4.16 starting (python version = 3.6.8)... 
Start: init plugins
INFO: selinux disabled
Finish: init plugins
Start: run
INFO: Start(/root/nightlyrpm76wiMo/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64)
Start: clean chroot
Finish: clean chroot
Start: chroot init
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled yum cache
Start: cleaning yum metadata
Finish: cleaning yum metadata
INFO: enabled HW Info plugin
Mock Version: 1.4.16
INFO: Mock Version: 1.4.16
Start: yum install
Finish: yum install
Start: creating root cache
Finish: creating root cache
Finish: chroot init
INFO: Installed packages:
Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: Outputting list of installed packages
Finish: Outputting list of installed packages
ERROR: Exception(/root/nightlyrpm76wiMo/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 25 seconds
INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64
INFO: Cleaning up build root ('cleanup_on_failure=True')
Start: clean chroot
Finish: clean chroot
ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M d133575442f840349f8dc385b211309d -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.j4xgd4pm:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec
Build step 'Execute shell' marked build as failure
Performing Post build task... 
Match found for :Building remotely : True
Logical operation result is TRUE
Running script  : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins7060729504164793654.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done 6c5a731c
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname  | ip_address   | chassis | used_count | current_state | comment  | distro | rel  | centos_version | architecture | node_pool | console_port | flavor |
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 127     | n63.pufty | 172.19.3.127 | pufty   | 3883       | Deployed      | 6c5a731c | None   | None | 7              | x86_64       | 1         | 2620         | None   |
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0

From ci at centos.org Sat Aug 10 00:40:49 2019
From: ci at centos.org (ci at centos.org)
Date: Sat, 10 Aug 2019 00:40:49 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #255
In-Reply-To: <1676918871.2762.1565311259035.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1676918871.2762.1565311259035.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <2124457869.2871.1565397649521.JavaMail.jenkins@jenkins.ci.centos.org>

See
------------------------------------------
[...truncated 288.92 KB...] 
TASK [container-engine/docker : check number of search domains] **************** Saturday 10 August 2019 01:40:05 +0100 (0:00:00.305) 0:03:01.285 ******* TASK [container-engine/docker : check length of search domains] **************** Saturday 10 August 2019 01:40:05 +0100 (0:00:00.304) 0:03:01.589 ******* TASK [container-engine/docker : check for minimum kernel version] ************** Saturday 10 August 2019 01:40:06 +0100 (0:00:00.323) 0:03:01.913 ******* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Saturday 10 August 2019 01:40:06 +0100 (0:00:00.301) 0:03:02.215 ******* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] *** Saturday 10 August 2019 01:40:06 +0100 (0:00:00.584) 0:03:02.799 ******* TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Saturday 10 August 2019 01:40:08 +0100 (0:00:01.375) 0:03:04.174 ******* TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Saturday 10 August 2019 01:40:08 +0100 (0:00:00.273) 0:03:04.448 ******* TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Saturday 10 August 2019 01:40:08 +0100 (0:00:00.253) 0:03:04.702 ******* TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Saturday 10 August 2019 01:40:09 +0100 (0:00:00.322) 0:03:05.025 ******* TASK [container-engine/docker : Configure docker repository on Fedora] ********* Saturday 10 August 2019 01:40:09 +0100 (0:00:00.310) 0:03:05.335 ******* TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Saturday 10 August 2019 01:40:09 +0100 (0:00:00.284) 0:03:05.620 ******* TASK [container-engine/docker : Copy yum.conf for editing] ********************* Saturday 10 August 2019 01:40:10 +0100 (0:00:00.290) 0:03:05.910 ******* TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Saturday 10 August 
2019 01:40:10 +0100 (0:00:00.304) 0:03:06.215 ******* TASK [container-engine/docker : ensure docker packages are installed] ********** Saturday 10 August 2019 01:40:10 +0100 (0:00:00.300) 0:03:06.516 ******* TASK [container-engine/docker : Ensure docker packages are installed] ********** Saturday 10 August 2019 01:40:11 +0100 (0:00:00.373) 0:03:06.889 ******* TASK [container-engine/docker : get available packages on Ubuntu] ************** Saturday 10 August 2019 01:40:11 +0100 (0:00:00.398) 0:03:07.288 ******* TASK [container-engine/docker : show available packages on ubuntu] ************* Saturday 10 August 2019 01:40:11 +0100 (0:00:00.280) 0:03:07.569 ******* TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Saturday 10 August 2019 01:40:11 +0100 (0:00:00.284) 0:03:07.854 ******* TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Saturday 10 August 2019 01:40:12 +0100 (0:00:00.286) 0:03:08.141 ******* ok: [kube2] ok: [kube1] ok: [kube3] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Saturday 10 August 2019 01:40:14 +0100 (0:00:02.015) 0:03:10.156 ******* ok: [kube1] ok: [kube3] ok: [kube2] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Saturday 10 August 2019 01:40:15 +0100 (0:00:01.135) 0:03:11.292 ******* TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Saturday 10 August 2019 01:40:15 +0100 (0:00:00.304) 0:03:11.596 ******* changed: [kube1] changed: [kube3] changed: [kube2] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Saturday 10 August 2019 01:40:16 +0100 (0:00:01.017) 0:03:12.613 ******* TASK [container-engine/docker : get systemd version] *************************** Saturday 10 August 2019 01:40:17 +0100 (0:00:00.368) 0:03:12.982 ******* TASK [container-engine/docker : Write docker.service systemd file] ************* Saturday 10 August 2019 01:40:17 +0100 (0:00:00.375) 0:03:13.358 ******* TASK [container-engine/docker : Write docker options systemd drop-in] ********** Saturday 10 August 2019 01:40:17 +0100 (0:00:00.342) 0:03:13.701 ******* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Saturday 10 August 2019 01:40:20 +0100 (0:00:02.321) 0:03:16.023 ******* changed: [kube1] changed: [kube3] changed: [kube2] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Saturday 10 August 2019 01:40:22 +0100 (0:00:02.235) 0:03:18.259 ******* TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Saturday 10 August 2019 01:40:22 +0100 (0:00:00.309) 0:03:18.569 ******* RUNNING HANDLER [container-engine/docker : restart docker] ********************* Saturday 10 August 2019 01:40:22 +0100 (0:00:00.243) 0:03:18.812 ******* changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Saturday 10 August 2019 01:40:24 +0100 (0:00:01.899) 0:03:20.711 ******* changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER 
[container-engine/docker : Docker | reload docker.socket] ****** Saturday 10 August 2019 01:40:26 +0100 (0:00:01.244) 0:03:21.956 ******* RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Saturday 10 August 2019 01:40:26 +0100 (0:00:00.353) 0:03:22.310 ******* changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Saturday 10 August 2019 01:40:30 +0100 (0:00:04.213) 0:03:26.523 ******* Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube1] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Saturday 10 August 2019 01:40:40 +0100 (0:00:10.275) 0:03:36.798 ******* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : ensure docker service is started and enabled] *** Saturday 10 August 2019 01:40:42 +0100 (0:00:01.282) 0:03:38.081 ******* ok: [kube1] => (item=docker) ok: [kube2] => (item=docker) ok: [kube3] => (item=docker) TASK [download : include_tasks] ************************************************ Saturday 10 August 2019 01:40:43 +0100 (0:00:01.231) 0:03:39.313 ******* included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Saturday 10 August 2019 01:40:43 +0100 (0:00:00.527) 0:03:39.841 ******* ok: [kube1] ok: [kube2] ok: [kube3] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Saturday 10 August 2019 01:40:45 +0100 (0:00:01.091) 0:03:40.933 ******* changed: [kube1] changed: [kube2] changed: [kube3] TASK [download : container_download | create local directory for saved/loaded container images] *** Saturday 10 August 2019 01:40:46 +0100 (0:00:01.118) 0:03:42.052 ******* TASK [download : 
Download items] *********************************************** Saturday 10 August 2019 01:40:46 +0100 (0:00:00.140) 0:03:42.192 ******* fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube3, kube2 included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube3, kube2 included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube3, kube2 PLAY RECAP ********************************************************************* kube1 : ok=109 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0 kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0 kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0 Saturday 10 August 2019 01:40:49 +0100 (0:00:02.708) 0:03:44.901 ******* =============================================================================== Install packages ------------------------------------------------------- 35.45s Wait for host to be available ------------------------------------------ 21.68s gather facts from all instances ---------------------------------------- 16.61s container-engine/docker : Docker | pause while Docker restarts --------- 10.28s Persist loaded modules -------------------------------------------------- 6.10s container-engine/docker : Docker | reload docker ------------------------ 4.21s kubernetes/preinstall : Create kubernetes directories ------------------- 4.06s download : Download items ----------------------------------------------- 2.71s bootstrap-os : Assign
inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.66s bootstrap-os : Gather nodes hostnames ----------------------------------- 2.62s Load required kernel modules -------------------------------------------- 2.56s kubernetes/preinstall : Create cni directories -------------------------- 2.51s Extend root VG ---------------------------------------------------------- 2.43s kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.34s container-engine/docker : Write docker options systemd drop-in ---------- 2.32s container-engine/docker : Write docker dns systemd drop-in -------------- 2.24s Gathering Facts --------------------------------------------------------- 2.22s download : Sync container ----------------------------------------------- 2.11s kubernetes/preinstall : Set selinux policy ------------------------------ 2.06s kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.02s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
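The ten failures per host above all come from the same root cause: Ansible rejecting a `delegate_to` keyword placed directly on a dynamic include (a TaskInclude). A minimal sketch of the broken pattern and one possible rework, assuming the failing task in download_container.yml was a delegated include (the task name and delegation target below are illustrative, not the actual kubespray source):

```yaml
# Illustrative only -- not the actual kubespray task.
# Placing delegate_to directly on a dynamic include triggers:
#   "'delegate_to' is not a valid attribute for a TaskInclude"
- name: container_download | Download containers (delegated)
  include_tasks: download_container.yml
  delegate_to: localhost

# Newer Ansible expects the keyword to be passed through with 'apply',
# so it is attached to each task inside the included file instead:
- name: container_download | Download containers (delegated)
  include_tasks: download_container.yml
  apply:
    delegate_to: localhost
```

Another option may be switching to `import_tasks`, which is resolved statically and inherits task keywords; which change kubespray actually made is not visible in this log.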
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Sat Aug 10 01:17:10 2019 From: ci at centos.org (ci at centos.org) Date: Sat, 10 Aug 2019 01:17:10 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #280 In-Reply-To: <967174068.2768.1565313193733.JavaMail.jenkins@jenkins.ci.centos.org> References: <967174068.2768.1565313193733.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1614870630.2878.1565399830745.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 57.15 KB...] changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. 
Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... [701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], 
u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── create └── prepare --> Scenario: 'default' --> Action: 'create' [DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg. PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] 
******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', 
u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
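The [701] and [703] findings that abort the lint action above are ansible-lint complaining that meta/main.yml still carries the `ansible-galaxy init` placeholder values and lists no platforms. A sketch of a meta/main.yml that would satisfy both rules; the author, company, and license values below are illustrative, not the project's real metadata:

```yaml
# Sketch only -- fills the galaxy_init placeholders flagged by
# ansible-lint [703] and adds the platforms list required by [701].
galaxy_info:
  author: Gluster maintainers          # placeholder replaced (assumption)
  description: Configure firewalld for GlusterFS nodes
  company: Red Hat                     # optional; illustrative
  license: GPLv3
  min_ansible_version: 2.5
  platforms:                           # satisfies [701]
    - name: EL
      versions:
        - 7
  galaxy_tags:
    - gluster
    - firewall
dependencies: []
```

With non-default metadata and a platforms entry in place, the molecule lint step would proceed past these findings instead of failing the sequence.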
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Sun Aug 11 00:11:06 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 11 Aug 2019 00:11:06 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #452 In-Reply-To: <1626853745.2869.1565396201396.JavaMail.jenkins@jenkins.ci.centos.org> References: <1626853745.2869.1565396201396.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <431428834.2928.1565482266216.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 41.36 KB...] ---> Package kernel-headers.x86_64 0:3.10.0-957.27.2.el7 will be installed ---> Package libmodman.x86_64 0:2.0.1-8.el7 will be installed ---> Package mock.noarch 0:1.4.16-1.el7 will be installed --> Processing Dependency: python(abi) = 3.6 for package: mock-1.4.16-1.el7.noarch --> Processing Dependency: /usr/bin/python3.6 for package: mock-1.4.16-1.el7.noarch ---> Package nettle.x86_64 0:2.7.1-8.el7 will be installed ---> Package python36-chardet.noarch 0:2.3.0-6.el7 will be installed --> Processing Dependency: python(abi) = 3.6 for package: python36-chardet-2.3.0-6.el7.noarch --> Processing Dependency: /usr/bin/python3.6 for package: python36-chardet-2.3.0-6.el7.noarch ---> Package python36-distro.noarch 0:1.2.0-3.el7 will be installed --> Processing Dependency: python(abi) = 3.6 for package: python36-distro-1.2.0-3.el7.noarch --> Processing Dependency: /usr/bin/python3.6 for package: python36-distro-1.2.0-3.el7.noarch ---> Package python36-idna.noarch 0:2.7-2.el7 will be installed --> Processing Dependency: python(abi) = 3.6 for package: python36-idna-2.7-2.el7.noarch ---> Package python36-jinja2.noarch 0:2.8.1-2.el7 will be 
installed
--> Processing Dependency: python(abi) = 3.6 for package: python36-jinja2-2.8.1-2.el7.noarch
---> Package python36-markupsafe.x86_64 0:0.23-3.el7 will be installed
--> Processing Dependency: python(abi) = 3.6 for package: python36-markupsafe-0.23-3.el7.x86_64
--> Processing Dependency: libpython3.6m.so.1.0()(64bit) for package: python36-markupsafe-0.23-3.el7.x86_64
---> Package python36-pyroute2.noarch 0:0.4.13-2.el7 will be installed
--> Processing Dependency: python(abi) = 3.6 for package: python36-pyroute2-0.4.13-2.el7.noarch
---> Package python36-pysocks.noarch 0:1.6.8-6.el7 will be installed
--> Processing Dependency: python(abi) = 3.6 for package: python36-pysocks-1.6.8-6.el7.noarch
---> Package python36-requests.noarch 0:2.12.5-3.el7 will be installed
--> Processing Dependency: python(abi) = 3.6 for package: python36-requests-2.12.5-3.el7.noarch
---> Package python36-rpm.x86_64 0:4.11.3-4.el7 will be installed
--> Processing Dependency: python(abi) = 3.6 for package: python36-rpm-4.11.3-4.el7.x86_64
--> Processing Dependency: libpython3.6m.so.1.0()(64bit) for package: python36-rpm-4.11.3-4.el7.x86_64
---> Package python36-setuptools.noarch 0:39.2.0-3.el7 will be installed
--> Processing Dependency: python(abi) = 3.6 for package: python36-setuptools-39.2.0-3.el7.noarch
--> Processing Dependency: /usr/bin/python3.6 for package: python36-setuptools-39.2.0-3.el7.noarch
---> Package python36-six.noarch 0:1.11.0-3.el7 will be installed
--> Processing Dependency: python(abi) = 3.6 for package: python36-six-1.11.0-3.el7.noarch
---> Package python36-urllib3.noarch 0:1.19.1-5.el7 will be installed
--> Processing Dependency: python(abi) = 3.6 for package: python36-urllib3-1.19.1-5.el7.noarch
---> Package trousers.x86_64 0:0.3.14-2.el7 will be installed
--> Processing Dependency: /usr/bin/python3.6 for package: mock-1.4.16-1.el7.noarch
--> Processing Dependency: /usr/bin/python3.6 for package: python36-setuptools-39.2.0-3.el7.noarch
--> Processing Dependency: /usr/bin/python3.6 for package: python36-chardet-2.3.0-6.el7.noarch
--> Processing Dependency: /usr/bin/python3.6 for package: python36-distro-1.2.0-3.el7.noarch
--> Finished Dependency Resolution
Error: Package: python36-distro-1.2.0-3.el7.noarch (epel)
           Requires: /usr/bin/python3.6
Error: Package: mock-1.4.16-1.el7.noarch (epel)
           Requires: python(abi) = 3.6
           Installed: python-2.7.5-80.el7_6.x86_64 (@updates)
               python(abi) = 2.7
               python(abi) = 2.7
           Available: python-2.7.5-76.el7.x86_64 (base)
               python(abi) = 2.7
               python(abi) = 2.7
           Available: python-2.7.5-77.el7_6.x86_64 (updates)
               python(abi) = 2.7
               python(abi) = 2.7
           Available: python34-3.4.10-2.el7.x86_64 (epel)
               python(abi) = 3.4
Error: Package: python36-urllib3-1.19.1-5.el7.noarch (epel)
           Requires: python(abi) = 3.6
           Installed: python-2.7.5-80.el7_6.x86_64 (@updates)
               python(abi) = 2.7
               python(abi) = 2.7
           Available: python-2.7.5-76.el7.x86_64 (base)
               python(abi) = 2.7
               python(abi) = 2.7
           Available: python-2.7.5-77.el7_6.x86_64 (updates)
               python(abi) = 2.7
               python(abi) = 2.7
           Available: python34-3.4.10-2.el7.x86_64 (epel)
               python(abi) = 3.4
Error: Package: python36-distro-1.2.0-3.el7.noarch (epel)
           Requires: python(abi) = 3.6
           Installed: python-2.7.5-80.el7_6.x86_64 (@updates)
               python(abi) = 2.7
               python(abi) = 2.7
           Available: python-2.7.5-76.el7.x86_64 (base)
               python(abi) = 2.7
               python(abi) = 2.7
           Available: python-2.7.5-77.el7_6.x86_64 (updates)
               python(abi) = 2.7
               python(abi) = 2.7
           Available: python34-3.4.10-2.el7.x86_64 (epel)
               python(abi) = 3.4
Error: Package: python36-markupsafe-0.23-3.el7.x86_64 (epel)
           Requires: python(abi) = 3.6
           Installed: python-2.7.5-80.el7_6.x86_64 (@updates)
               python(abi) = 2.7
               python(abi) = 2.7
           Available: python-2.7.5-76.el7.x86_64 (base)
               python(abi) = 2.7
               python(abi) = 2.7
           Available: python-2.7.5-77.el7_6.x86_64 (updates)
               python(abi) = 2.7
               python(abi) = 2.7
           Available: python34-3.4.10-2.el7.x86_64 (epel)
               python(abi) = 3.4
Error: Package: python36-jinja2-2.8.1-2.el7.noarch (epel)
           Requires: python(abi) = 3.6
           Installed: python-2.7.5-80.el7_6.x86_64 (@updates)
               python(abi) = 2.7
               python(abi) = 2.7
           Available: python-2.7.5-76.el7.x86_64 (base)
               python(abi) = 2.7
               python(abi) = 2.7
           Available: python-2.7.5-77.el7_6.x86_64 (updates)
               python(abi) = 2.7
               python(abi) = 2.7
           Available: python34-3.4.10-2.el7.x86_64 (epel)
               python(abi) = 3.4
Error: Package: python36-six-1.11.0-3.el7.noarch (epel)
           Requires: python(abi) = 3.6
           Installed: python-2.7.5-80.el7_6.x86_64 (@updates)
               python(abi) = 2.7
               python(abi) = 2.7
           Available: python-2.7.5-76.el7.x86_64 (base)
               python(abi) = 2.7
               python(abi) = 2.7
           Available: python-2.7.5-77.el7_6.x86_64 (updates)
               python(abi) = 2.7
               python(abi) = 2.7
           Available: python34-3.4.10-2.el7.x86_64 (epel)
               python(abi) = 3.4
Error: Package: python36-rpm-4.11.3-4.el7.x86_64 (epel)
           Requires: python(abi) = 3.6
           Installed: python-2.7.5-80.el7_6.x86_64 (@updates)
               python(abi) = 2.7
               python(abi) = 2.7
           Available: python-2.7.5-76.el7.x86_64 (base)
               python(abi) = 2.7
               python(abi) = 2.7
           Available: python-2.7.5-77.el7_6.x86_64 (updates)
               python(abi) = 2.7
               python(abi) = 2.7
           Available: python34-3.4.10-2.el7.x86_64 (epel)
               python(abi) = 3.4
Error: Package: mock-1.4.16-1.el7.noarch (epel)
           Requires: /usr/bin/python3.6
Error: Package: python36-pyroute2-0.4.13-2.el7.noarch (epel)
           Requires: python(abi) = 3.6
           Installed: python-2.7.5-80.el7_6.x86_64 (@updates)
               python(abi) = 2.7
               python(abi) = 2.7
           Available: python-2.7.5-76.el7.x86_64 (base)
               python(abi) = 2.7
               python(abi) = 2.7
           Available: python-2.7.5-77.el7_6.x86_64 (updates)
               python(abi) = 2.7
               python(abi) = 2.7
           Available: python34-3.4.10-2.el7.x86_64 (epel)
               python(abi) = 3.4
Error: Package: python36-chardet-2.3.0-6.el7.noarch (epel)
           Requires: /usr/bin/python3.6
Error: Package: python36-pysocks-1.6.8-6.el7.noarch (epel)
           Requires: python(abi) = 3.6
           Installed: python-2.7.5-80.el7_6.x86_64 (@updates)
               python(abi) = 2.7
               python(abi) = 2.7
           Available: python-2.7.5-76.el7.x86_64 (base)
               python(abi) = 2.7
               python(abi) = 2.7
           Available: python-2.7.5-77.el7_6.x86_64 (updates)
               python(abi) = 2.7
               python(abi) = 2.7
           Available: python34-3.4.10-2.el7.x86_64 (epel)
               python(abi) = 3.4
Error: Package: python36-idna-2.7-2.el7.noarch (epel)
           Requires: python(abi) = 3.6
           Installed: python-2.7.5-80.el7_6.x86_64 (@updates)
               python(abi) = 2.7
               python(abi) = 2.7
           Available: python-2.7.5-76.el7.x86_64 (base)
               python(abi) = 2.7
               python(abi) = 2.7
           Available: python-2.7.5-77.el7_6.x86_64 (updates)
               python(abi) = 2.7
               python(abi) = 2.7
           Available: python34-3.4.10-2.el7.x86_64 (epel)
               python(abi) = 3.4
Error: Package: python36-requests-2.12.5-3.el7.noarch (epel)
           Requires: python(abi) = 3.6
           Installed: python-2.7.5-80.el7_6.x86_64 (@updates)
               python(abi) = 2.7
               python(abi) = 2.7
           Available: python-2.7.5-76.el7.x86_64 (base)
               python(abi) = 2.7
               python(abi) = 2.7
           Available: python-2.7.5-77.el7_6.x86_64 (updates)
               python(abi) = 2.7
               python(abi) = 2.7
           Available: python34-3.4.10-2.el7.x86_64 (epel)
               python(abi) = 3.4
Error: Package: python36-rpm-4.11.3-4.el7.x86_64 (epel)
           Requires: libpython3.6m.so.1.0()(64bit)
Error: Package: python36-setuptools-39.2.0-3.el7.noarch (epel)
           Requires: /usr/bin/python3.6
Error: Package: python36-setuptools-39.2.0-3.el7.noarch (epel)
           Requires: python(abi) = 3.6
           Installed: python-2.7.5-80.el7_6.x86_64 (@updates)
               python(abi) = 2.7
               python(abi) = 2.7
           Available: python-2.7.5-76.el7.x86_64 (base)
               python(abi) = 2.7
               python(abi) = 2.7
           Available: python-2.7.5-77.el7_6.x86_64 (updates)
               python(abi) = 2.7
               python(abi) = 2.7
           Available: python34-3.4.10-2.el7.x86_64 (epel)
               python(abi) = 3.4
Error: Package: python36-chardet-2.3.0-6.el7.noarch (epel)
           Requires: python(abi) = 3.6
           Installed: python-2.7.5-80.el7_6.x86_64 (@updates)
               python(abi) = 2.7
               python(abi) = 2.7
           Available: python-2.7.5-76.el7.x86_64 (base)
               python(abi) = 2.7
               python(abi) = 2.7
           Available: python-2.7.5-77.el7_6.x86_64 (updates)
               python(abi) = 2.7
               python(abi) = 2.7
           Available: python34-3.4.10-2.el7.x86_64 (epel)
               python(abi) = 3.4
 You could try using --skip-broken to work around the problem
Error: Package: python36-markupsafe-0.23-3.el7.x86_64 (epel)
           Requires: libpython3.6m.so.1.0()(64bit)
 You could try running: rpm -Va --nofiles --nodigest
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for :Building remotely : True
Logical operation result is TRUE
Running script  : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins935229327488566409.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done dcb46940
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor |
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 233 | n42.dusty | 172.19.2.106 | dusty | 3887 | Deployed | dcb46940 | None | None | 7 | x86_64 | 1 | 2410 | None |
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0

From ci at centos.org Sun Aug 11 00:40:51 2019
From: ci at centos.org (ci at centos.org)
Date: Sun, 11 Aug 2019 00:40:51 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #256
In-Reply-To: <2124457869.2871.1565397649521.JavaMail.jenkins@jenkins.ci.centos.org>
References: <2124457869.2871.1565397649521.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <874528101.2929.1565484051850.JavaMail.jenkins@jenkins.ci.centos.org>

See 
------------------------------------------ [...truncated 289.16 KB...] TASK [container-engine/docker : check number of search domains] **************** Sunday 11 August 2019 01:40:08 +0100 (0:00:00.308) 0:02:57.638 ********* TASK [container-engine/docker : check length of search domains] **************** Sunday 11 August 2019 01:40:08 +0100 (0:00:00.290) 0:02:57.928 ********* TASK [container-engine/docker : check for minimum kernel version] ************** Sunday 11 August 2019 01:40:09 +0100 (0:00:00.295) 0:02:58.224 ********* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Sunday 11 August 2019 01:40:09 +0100 (0:00:00.287) 0:02:58.511 ********* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] *** Sunday 11 August 2019 01:40:10 +0100 (0:00:00.665) 0:02:59.177 ********* TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Sunday 11 August 2019 01:40:11 +0100 (0:00:01.307) 0:03:00.484 ********* TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Sunday 11 August 2019 01:40:11 +0100 (0:00:00.255) 0:03:00.739 ********* TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Sunday 11 August 2019 01:40:11 +0100 (0:00:00.349) 0:03:01.089 ********* TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Sunday 11 August 2019 01:40:12 +0100 (0:00:00.322) 0:03:01.412 ********* TASK [container-engine/docker : Configure docker repository on Fedora] ********* Sunday 11 August 2019 01:40:12 +0100 (0:00:00.351) 0:03:01.763 ********* TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Sunday 11 August 2019 01:40:12 +0100 (0:00:00.300) 0:03:02.064 ********* TASK [container-engine/docker : Copy yum.conf for editing] ********************* Sunday 11 August 2019 01:40:13 +0100 (0:00:00.277) 0:03:02.341 ********* TASK [container-engine/docker : 
Edit copy of yum.conf to set obsoletes=0] ****** Sunday 11 August 2019 01:40:13 +0100 (0:00:00.292) 0:03:02.634 ********* TASK [container-engine/docker : ensure docker packages are installed] ********** Sunday 11 August 2019 01:40:13 +0100 (0:00:00.284) 0:03:02.919 ********* TASK [container-engine/docker : Ensure docker packages are installed] ********** Sunday 11 August 2019 01:40:14 +0100 (0:00:00.411) 0:03:03.330 ********* TASK [container-engine/docker : get available packages on Ubuntu] ************** Sunday 11 August 2019 01:40:14 +0100 (0:00:00.354) 0:03:03.685 ********* TASK [container-engine/docker : show available packages on ubuntu] ************* Sunday 11 August 2019 01:40:14 +0100 (0:00:00.271) 0:03:03.957 ********* TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Sunday 11 August 2019 01:40:15 +0100 (0:00:00.274) 0:03:04.231 ********* TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Sunday 11 August 2019 01:40:15 +0100 (0:00:00.278) 0:03:04.510 ********* ok: [kube2] ok: [kube1] ok: [kube3] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Sunday 11 August 2019 01:40:17 +0100 (0:00:01.929) 0:03:06.440 ********* ok: [kube1] ok: [kube3] ok: [kube2] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Sunday 11 August 2019 01:40:18 +0100 (0:00:01.056) 0:03:07.497 ********* TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Sunday 11 August 2019 01:40:18 +0100 (0:00:00.299) 0:03:07.796 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Sunday 11 August 2019 01:40:19 +0100 (0:00:01.047) 0:03:08.844 ********* TASK [container-engine/docker : get systemd version] *************************** Sunday 11 August 2019 01:40:20 +0100 (0:00:00.322) 0:03:09.166 ********* TASK [container-engine/docker : Write docker.service systemd file] ************* Sunday 11 August 2019 01:40:20 +0100 (0:00:00.308) 0:03:09.474 ********* TASK [container-engine/docker : Write docker options systemd drop-in] ********** Sunday 11 August 2019 01:40:20 +0100 (0:00:00.349) 0:03:09.824 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Sunday 11 August 2019 01:40:22 +0100 (0:00:02.280) 0:03:12.104 ********* changed: [kube1] changed: [kube3] changed: [kube2] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Sunday 11 August 2019 01:40:25 +0100 (0:00:02.092) 0:03:14.197 ********* TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Sunday 11 August 2019 01:40:25 +0100 (0:00:00.328) 0:03:14.525 ********* RUNNING HANDLER [container-engine/docker : restart docker] ********************* Sunday 11 August 2019 01:40:25 +0100 (0:00:00.252) 0:03:14.778 ********* changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Sunday 11 August 2019 01:40:27 +0100 (0:00:02.023) 0:03:16.801 ********* changed: [kube1] changed: [kube3] changed: [kube2] RUNNING HANDLER 
[container-engine/docker : Docker | reload docker.socket] ****** Sunday 11 August 2019 01:40:28 +0100 (0:00:01.070) 0:03:17.871 ********* RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Sunday 11 August 2019 01:40:29 +0100 (0:00:00.301) 0:03:18.173 ********* changed: [kube1] changed: [kube3] changed: [kube2] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Sunday 11 August 2019 01:40:33 +0100 (0:00:04.107) 0:03:22.281 ********* Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube1] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Sunday 11 August 2019 01:40:43 +0100 (0:00:10.214) 0:03:32.495 ********* changed: [kube2] changed: [kube1] changed: [kube3] TASK [container-engine/docker : ensure docker service is started and enabled] *** Sunday 11 August 2019 01:40:44 +0100 (0:00:01.296) 0:03:33.792 ********* ok: [kube1] => (item=docker) ok: [kube3] => (item=docker) ok: [kube2] => (item=docker) TASK [download : include_tasks] ************************************************ Sunday 11 August 2019 01:40:45 +0100 (0:00:01.250) 0:03:35.042 ********* included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Sunday 11 August 2019 01:40:46 +0100 (0:00:00.534) 0:03:35.576 ********* ok: [kube2] ok: [kube1] ok: [kube3] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Sunday 11 August 2019 01:40:47 +0100 (0:00:01.077) 0:03:36.654 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [download : container_download | create local directory for saved/loaded container images] *** Sunday 11 August 2019 01:40:48 +0100 (0:00:00.968) 0:03:37.623 ********* TASK [download : 
Download items] *********************************************** Sunday 11 August 2019 01:40:48 +0100 (0:00:00.127) 0:03:37.750 ********* fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} PLAY RECAP ********************************************************************* kube1 : ok=109 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0 kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0 kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0 Sunday 11 August 2019 01:40:51 +0100 (0:00:02.742) 0:03:40.492 ********* =============================================================================== Install packages ------------------------------------------------------- 34.58s Wait for host to be available ------------------------------------------ 21.36s gather facts from all instances ---------------------------------------- 15.68s container-engine/docker : Docker | pause while Docker restarts --------- 10.21s Persist loaded modules -------------------------------------------------- 6.20s container-engine/docker : Docker | reload docker ------------------------ 4.11s kubernetes/preinstall : Create kubernetes directories ------------------- 3.94s download : Download items ----------------------------------------------- 2.74s bootstrap-os : Assign 
inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.64s bootstrap-os : Gather nodes hostnames ----------------------------------- 2.53s Load required kernel modules -------------------------------------------- 2.52s Extend root VG ---------------------------------------------------------- 2.48s kubernetes/preinstall : Create cni directories -------------------------- 2.39s kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.35s Gathering Facts --------------------------------------------------------- 2.33s container-engine/docker : Write docker options systemd drop-in ---------- 2.28s kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.14s container-engine/docker : Write docker dns systemd drop-in -------------- 2.09s download : Download items ----------------------------------------------- 2.03s container-engine/docker : restart docker -------------------------------- 2.02s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Sun Aug 11 00:51:29 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 11 Aug 2019 00:51:29 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10827 - Failure! 
(master on CentOS-7/x86_64) Message-ID: <575217307.2935.1565484689179.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10827 - Failure: Check console output at https://ci.centos.org/job/gluster_build-rpms/10827/ to view the results. From ci at centos.org Sun Aug 11 00:53:05 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 11 Aug 2019 00:53:05 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10830 - Failure! (release-4.1 on CentOS-7/x86_64) Message-ID: <802990246.2937.1565484785995.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10830 - Failure: Check console output at https://ci.centos.org/job/gluster_build-rpms/10830/ to view the results. From ci at centos.org Sun Aug 11 00:55:56 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 11 Aug 2019 00:55:56 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10832 - Failure! (release-5 on CentOS-7/x86_64) Message-ID: <1630786094.2939.1565484957124.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10832 - Failure: Check console output at https://ci.centos.org/job/gluster_build-rpms/10832/ to view the results. From ci at centos.org Sun Aug 11 00:58:36 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 11 Aug 2019 00:58:36 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10833 - Still Failing! (release-6 on CentOS-7/x86_64) In-Reply-To: <1630786094.2939.1565484957124.JavaMail.jenkins@jenkins.ci.centos.org> References: <1630786094.2939.1565484957124.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <297052547.2941.1565485116276.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10833 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10833/ to view the results. 
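[Editor's note] The kubespray failure in the log above repeats a single Ansible error: "'delegate_to' is not a valid attribute for a TaskInclude". From Ansible 2.8 on, task keywords such as delegate_to can no longer be set directly on an include_tasks task; they have to be pushed down to the included tasks, for example via the include's `apply` parameter. A minimal sketch of the broken and accepted forms follows; the file and variable names are illustrative, not taken from the kubespray sources:

```yaml
# Form rejected by Ansible 2.8+ (produces the TaskInclude error above).
# download_decision.yml and delegate_host are illustrative names.
- name: container_download | Make download decision if pull is required by tag or sha256
  include_tasks: download_decision.yml
  delegate_to: "{{ delegate_host }}"   # not a valid attribute for a TaskInclude

# One accepted form: apply the keyword to the included tasks instead.
- name: container_download | Make download decision if pull is required by tag or sha256
  include_tasks:
    file: download_decision.yml
    apply:
      delegate_to: "{{ delegate_host }}"
```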
From ci at centos.org Sun Aug 11 01:13:14 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 11 Aug 2019 01:13:14 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #281 In-Reply-To: <1614870630.2878.1565399830745.JavaMail.jenkins@jenkins.ci.centos.org> References: <1614870630.2878.1565399830745.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1211552334.2946.1565485994296.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 57.24 KB...] changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └──
destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... [701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': 
u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── create └── prepare --> Scenario: 'default' --> Action: 'create' [DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg. PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete]
******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', 
u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Mon Aug 12 00:10:44 2019 From: ci at centos.org (ci at centos.org) Date: Mon, 12 Aug 2019 00:10:44 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #453 In-Reply-To: <431428834.2928.1565482266216.JavaMail.jenkins@jenkins.ci.centos.org> References: <431428834.2928.1565482266216.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <2074973828.3001.1565568644376.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 41.24 KB...] ---> Package kernel-headers.x86_64 0:3.10.0-957.27.2.el7 will be installed ---> Package libmodman.x86_64 0:2.0.1-8.el7 will be installed ---> Package mock.noarch 0:1.4.16-1.el7 will be installed --> Processing Dependency: python(abi) = 3.6 for package: mock-1.4.16-1.el7.noarch --> Processing Dependency: /usr/bin/python3.6 for package: mock-1.4.16-1.el7.noarch ---> Package nettle.x86_64 0:2.7.1-8.el7 will be installed ---> Package python36-chardet.noarch 0:2.3.0-6.el7 will be installed --> Processing Dependency: python(abi) = 3.6 for package: python36-chardet-2.3.0-6.el7.noarch --> Processing Dependency: /usr/bin/python3.6 for package: python36-chardet-2.3.0-6.el7.noarch ---> Package python36-distro.noarch 0:1.2.0-3.el7 will be installed --> Processing Dependency: python(abi) = 3.6 for package: python36-distro-1.2.0-3.el7.noarch --> Processing Dependency: /usr/bin/python3.6 for package: python36-distro-1.2.0-3.el7.noarch ---> Package python36-idna.noarch 0:2.7-2.el7 will be installed --> Processing Dependency: python(abi) = 3.6 for package: python36-idna-2.7-2.el7.noarch ---> Package python36-jinja2.noarch 0:2.8.1-2.el7 will be 
installed --> Processing Dependency: python(abi) = 3.6 for package: python36-jinja2-2.8.1-2.el7.noarch ---> Package python36-markupsafe.x86_64 0:0.23-3.el7 will be installed --> Processing Dependency: python(abi) = 3.6 for package: python36-markupsafe-0.23-3.el7.x86_64 --> Processing Dependency: libpython3.6m.so.1.0()(64bit) for package: python36-markupsafe-0.23-3.el7.x86_64 ---> Package python36-pyroute2.noarch 0:0.4.13-2.el7 will be installed --> Processing Dependency: python(abi) = 3.6 for package: python36-pyroute2-0.4.13-2.el7.noarch ---> Package python36-pysocks.noarch 0:1.6.8-6.el7 will be installed --> Processing Dependency: python(abi) = 3.6 for package: python36-pysocks-1.6.8-6.el7.noarch ---> Package python36-requests.noarch 0:2.12.5-3.el7 will be installed --> Processing Dependency: python(abi) = 3.6 for package: python36-requests-2.12.5-3.el7.noarch ---> Package python36-rpm.x86_64 0:4.11.3-4.el7 will be installed --> Processing Dependency: python(abi) = 3.6 for package: python36-rpm-4.11.3-4.el7.x86_64 --> Processing Dependency: libpython3.6m.so.1.0()(64bit) for package: python36-rpm-4.11.3-4.el7.x86_64 ---> Package python36-setuptools.noarch 0:39.2.0-3.el7 will be installed --> Processing Dependency: python(abi) = 3.6 for package: python36-setuptools-39.2.0-3.el7.noarch --> Processing Dependency: /usr/bin/python3.6 for package: python36-setuptools-39.2.0-3.el7.noarch ---> Package python36-six.noarch 0:1.11.0-3.el7 will be installed --> Processing Dependency: python(abi) = 3.6 for package: python36-six-1.11.0-3.el7.noarch ---> Package python36-urllib3.noarch 0:1.19.1-5.el7 will be installed --> Processing Dependency: python(abi) = 3.6 for package: python36-urllib3-1.19.1-5.el7.noarch ---> Package trousers.x86_64 0:0.3.14-2.el7 will be installed --> Processing Dependency: /usr/bin/python3.6 for package: mock-1.4.16-1.el7.noarch --> Processing Dependency: /usr/bin/python3.6 for package: python36-setuptools-39.2.0-3.el7.noarch --> Processing Dependency: 
/usr/bin/python3.6 for package: python36-chardet-2.3.0-6.el7.noarch --> Processing Dependency: /usr/bin/python3.6 for package: python36-distro-1.2.0-3.el7.noarch --> Finished Dependency Resolution Error: Package: python36-distro-1.2.0-3.el7.noarch (epel) Requires: /usr/bin/python3.6 Error: Package: mock-1.4.16-1.el7.noarch (epel) Requires: python(abi) = 3.6 Installed: python-2.7.5-80.el7_6.x86_64 (@updates) python(abi) = 2.7 python(abi) = 2.7 Available: python-2.7.5-76.el7.x86_64 (base) python(abi) = 2.7 python(abi) = 2.7 Available: python-2.7.5-77.el7_6.x86_64 (updates) python(abi) = 2.7 python(abi) = 2.7 Available: python34-3.4.10-2.el7.x86_64 (epel) python(abi) = 3.4 Error: Package: python36-urllib3-1.19.1-5.el7.noarch (epel) Requires: python(abi) = 3.6 Installed: python-2.7.5-80.el7_6.x86_64 (@updates) python(abi) = 2.7 python(abi) = 2.7 Available: python-2.7.5-76.el7.x86_64 (base) python(abi) = 2.7 python(abi) = 2.7 Available: python-2.7.5-77.el7_6.x86_64 (updates) python(abi) = 2.7 python(abi) = 2.7 Available: python34-3.4.10-2.el7.x86_64 (epel) python(abi) = 3.4 Error: Package: python36-distro-1.2.0-3.el7.noarch (epel) Requires: python(abi) = 3.6 Installed: python-2.7.5-80.el7_6.x86_64 (@updates) python(abi) = 2.7 python(abi) = 2.7 Available: python-2.7.5-76.el7.x86_64 (base) python(abi) = 2.7 python(abi) = 2.7 Available: python-2.7.5-77.el7_6.x86_64 (updates) python(abi) = 2.7 python(abi) = 2.7 Available: python34-3.4.10-2.el7.x86_64 (epel) python(abi) = 3.4 Error: Package: python36-markupsafe-0.23-3.el7.x86_64 (epel) Requires: python(abi) = 3.6 Installed: python-2.7.5-80.el7_6.x86_64 (@updates) python(abi) = 2.7 python(abi) = 2.7 Available: python-2.7.5-76.el7.x86_64 (base) python(abi) = 2.7 python(abi) = 2.7 Available: python-2.7.5-77.el7_6.x86_64 (updates) python(abi) = 2.7 python(abi) = 2.7 Available: python34-3.4.10-2.el7.x86_64 (epel) python(abi) = 3.4 Error: Package: python36-jinja2-2.8.1-2.el7.noarch (epel) Requires: python(abi) = 3.6 Installed: 
python-2.7.5-80.el7_6.x86_64 (@updates) python(abi) = 2.7 python(abi) = 2.7 Available: python-2.7.5-76.el7.x86_64 (base) python(abi) = 2.7 python(abi) = 2.7 Available: python-2.7.5-77.el7_6.x86_64 (updates) python(abi) = 2.7 python(abi) = 2.7 Available: python34-3.4.10-2.el7.x86_64 (epel) python(abi) = 3.4 Error: Package: python36-six-1.11.0-3.el7.noarch (epel) Requires: python(abi) = 3.6 Installed: python-2.7.5-80.el7_6.x86_64 (@updates) python(abi) = 2.7 python(abi) = 2.7 Available: python-2.7.5-76.el7.x86_64 (base) python(abi) = 2.7 python(abi) = 2.7 Available: python-2.7.5-77.el7_6.x86_64 (updates) python(abi) = 2.7 python(abi) = 2.7 Available: python34-3.4.10-2.el7.x86_64 (epel) python(abi) = 3.4 Error: Package: python36-rpm-4.11.3-4.el7.x86_64 (epel) Requires: python(abi) = 3.6 Installed: python-2.7.5-80.el7_6.x86_64 (@updates) python(abi) = 2.7 python(abi) = 2.7 Available: python-2.7.5-76.el7.x86_64 (base) python(abi) = 2.7 python(abi) = 2.7 Available: python-2.7.5-77.el7_6.x86_64 (updates) python(abi) = 2.7 python(abi) = 2.7 Available: python34-3.4.10-2.el7.x86_64 (epel) python(abi) = 3.4 Error: Package: mock-1.4.16-1.el7.noarch (epel) Requires: /usr/bin/python3.6 Error: Package: python36-pyroute2-0.4.13-2.el7.noarch (epel) Requires: python(abi) = 3.6 Installed: python-2.7.5-80.el7_6.x86_64 (@updates) python(abi) = 2.7 python(abi) = 2.7 Available: python-2.7.5-76.el7.x86_64 (base) python(abi) = 2.7 python(abi) = 2.7 Available: python-2.7.5-77.el7_6.x86_64 (updates) python(abi) = 2.7 python(abi) = 2.7 Available: python34-3.4.10-2.el7.x86_64 (epel) python(abi) = 3.4 Error: Package: python36-chardet-2.3.0-6.el7.noarch (epel) Requires: /usr/bin/python3.6 Error: Package: python36-pysocks-1.6.8-6.el7.noarch (epel) Requires: python(abi) = 3.6 Installed: python-2.7.5-80.el7_6.x86_64 (@updates) python(abi) = 2.7 python(abi) = 2.7 Available: python-2.7.5-76.el7.x86_64 (base) python(abi) = 2.7 python(abi) = 2.7 Available: python-2.7.5-77.el7_6.x86_64 (updates) 
python(abi) = 2.7 python(abi) = 2.7 Available: python34-3.4.10-2.el7.x86_64 (epel) python(abi) = 3.4 Error: Package: python36-idna-2.7-2.el7.noarch (epel) Requires: python(abi) = 3.6 Installed: python-2.7.5-80.el7_6.x86_64 (@updates) python(abi) = 2.7 python(abi) = 2.7 Available: python-2.7.5-76.el7.x86_64 (base) python(abi) = 2.7 python(abi) = 2.7 Available: python-2.7.5-77.el7_6.x86_64 (updates) python(abi) = 2.7 python(abi) = 2.7 Available: python34-3.4.10-2.el7.x86_64 (epel) python(abi) = 3.4 Error: Package: python36-requests-2.12.5-3.el7.noarch (epel) Requires: python(abi) = 3.6 Installed: python-2.7.5-80.el7_6.x86_64 (@updates) python(abi) = 2.7 python(abi) = 2.7 Available: python-2.7.5-76.el7.x86_64 (base) python(abi) = 2.7 python(abi) = 2.7 Available: python-2.7.5-77.el7_6.x86_64 (updates) python(abi) = 2.7 python(abi) = 2.7 Available: python34-3.4.10-2.el7.x86_64 (epel) python(abi) = 3.4 Error: Package: python36-rpm-4.11.3-4.el7.x86_64 (epel) Requires: libpython3.6m.so.1.0()(64bit) You could try using --skip-broken to work around the problem Error: Package: python36-setuptools-39.2.0-3.el7.noarch (epel) Requires: /usr/bin/python3.6 Error: Package: python36-setuptools-39.2.0-3.el7.noarch (epel) Requires: python(abi) = 3.6 Installed: python-2.7.5-80.el7_6.x86_64 (@updates) python(abi) = 2.7 python(abi) = 2.7 Available: python-2.7.5-76.el7.x86_64 (base) python(abi) = 2.7 python(abi) = 2.7 Available: python-2.7.5-77.el7_6.x86_64 (updates) python(abi) = 2.7 python(abi) = 2.7 Available: python34-3.4.10-2.el7.x86_64 (epel) python(abi) = 3.4 Error: Package: python36-chardet-2.3.0-6.el7.noarch (epel) Requires: python(abi) = 3.6 Installed: python-2.7.5-80.el7_6.x86_64 (@updates) python(abi) = 2.7 python(abi) = 2.7 Available: python-2.7.5-76.el7.x86_64 (base) python(abi) = 2.7 python(abi) = 2.7 Available: python-2.7.5-77.el7_6.x86_64 (updates) python(abi) = 2.7 python(abi) = 2.7 Available: python34-3.4.10-2.el7.x86_64 (epel) python(abi) = 3.4 Error: Package: 
python36-markupsafe-0.23-3.el7.x86_64 (epel) Requires: libpython3.6m.so.1.0()(64bit) You could try running: rpm -Va --nofiles --nodigest Build step 'Execute shell' marked build as failure Performing Post build task... Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins4396190683254499654.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done b4856cb8 +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 233 | n42.dusty | 172.19.2.106 | dusty | 3890 | Deployed | b4856cb8 | None | None | 7 | x86_64 | 1 | 2410 | None | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Mon Aug 12 00:41:01 2019 From: ci at centos.org (ci at centos.org) Date: Mon, 12 Aug 2019 00:41:01 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #257 In-Reply-To: <874528101.2929.1565484051850.JavaMail.jenkins@jenkins.ci.centos.org> References: <874528101.2929.1565484051850.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1359260898.3002.1565570461542.JavaMail.jenkins@jenkins.ci.centos.org> See 
------------------------------------------ [...truncated 288.99 KB...] TASK [container-engine/docker : check number of search domains] **************** Monday 12 August 2019 01:40:17 +0100 (0:00:00.302) 0:03:02.716 ********* TASK [container-engine/docker : check length of search domains] **************** Monday 12 August 2019 01:40:18 +0100 (0:00:00.298) 0:03:03.015 ********* TASK [container-engine/docker : check for minimum kernel version] ************** Monday 12 August 2019 01:40:18 +0100 (0:00:00.290) 0:03:03.306 ********* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Monday 12 August 2019 01:40:18 +0100 (0:00:00.287) 0:03:03.593 ********* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] *** Monday 12 August 2019 01:40:19 +0100 (0:00:00.654) 0:03:04.248 ********* TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Monday 12 August 2019 01:40:20 +0100 (0:00:01.289) 0:03:05.538 ********* TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Monday 12 August 2019 01:40:20 +0100 (0:00:00.301) 0:03:05.839 ********* TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Monday 12 August 2019 01:40:21 +0100 (0:00:00.257) 0:03:06.097 ********* TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Monday 12 August 2019 01:40:21 +0100 (0:00:00.305) 0:03:06.403 ********* TASK [container-engine/docker : Configure docker repository on Fedora] ********* Monday 12 August 2019 01:40:21 +0100 (0:00:00.322) 0:03:06.726 ********* TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Monday 12 August 2019 01:40:22 +0100 (0:00:00.288) 0:03:07.014 ********* TASK [container-engine/docker : Copy yum.conf for editing] ********************* Monday 12 August 2019 01:40:22 +0100 (0:00:00.284) 0:03:07.299 ********* TASK [container-engine/docker : 
Edit copy of yum.conf to set obsoletes=0] ******
Monday 12 August 2019 01:40:22 +0100 (0:00:00.295) 0:03:07.594 *********
TASK [container-engine/docker : ensure docker packages are installed] **********
Monday 12 August 2019 01:40:23 +0100 (0:00:00.303) 0:03:07.898 *********
TASK [container-engine/docker : Ensure docker packages are installed] **********
Monday 12 August 2019 01:40:23 +0100 (0:00:00.371) 0:03:08.269 *********
TASK [container-engine/docker : get available packages on Ubuntu] **************
Monday 12 August 2019 01:40:23 +0100 (0:00:00.386) 0:03:08.656 *********
TASK [container-engine/docker : show available packages on ubuntu] *************
Monday 12 August 2019 01:40:24 +0100 (0:00:00.295) 0:03:08.951 *********
TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] ***
Monday 12 August 2019 01:40:24 +0100 (0:00:00.283) 0:03:09.235 *********
TASK [container-engine/docker : ensure service is started if docker packages are already present] ***
Monday 12 August 2019 01:40:24 +0100 (0:00:00.298) 0:03:09.533 *********
ok: [kube2]
ok: [kube1]
ok: [kube3]
 [WARNING]: flush_handlers task does not support when conditional
TASK [container-engine/docker : set fact for docker_version] *******************
Monday 12 August 2019 01:40:26 +0100 (0:00:01.981) 0:03:11.515 *********
ok: [kube1]
ok: [kube2]
ok: [kube3]
TASK [container-engine/docker : check minimum docker version for docker_dns mode. You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] ***
Monday 12 August 2019 01:40:27 +0100 (0:00:01.170) 0:03:12.685 *********
TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] ***
Monday 12 August 2019 01:40:28 +0100 (0:00:00.342) 0:03:13.027 *********
changed: [kube1]
changed: [kube2]
changed: [kube3]
TASK [container-engine/docker : Write docker proxy drop-in] ********************
Monday 12 August 2019 01:40:29 +0100 (0:00:00.970) 0:03:13.998 *********
TASK [container-engine/docker : get systemd version] ***************************
Monday 12 August 2019 01:40:29 +0100 (0:00:00.306) 0:03:14.305 *********
TASK [container-engine/docker : Write docker.service systemd file] *************
Monday 12 August 2019 01:40:29 +0100 (0:00:00.304) 0:03:14.609 *********
TASK [container-engine/docker : Write docker options systemd drop-in] **********
Monday 12 August 2019 01:40:30 +0100 (0:00:00.317) 0:03:14.927 *********
changed: [kube3]
changed: [kube1]
changed: [kube2]
TASK [container-engine/docker : Write docker dns systemd drop-in] **************
Monday 12 August 2019 01:40:32 +0100 (0:00:02.323) 0:03:17.250 *********
changed: [kube1]
changed: [kube2]
changed: [kube3]
TASK [container-engine/docker : Copy docker orphan clean up script to the node] ***
Monday 12 August 2019 01:40:34 +0100 (0:00:02.166) 0:03:19.417 *********
TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] ***
Monday 12 August 2019 01:40:34 +0100 (0:00:00.314) 0:03:19.731 *********
RUNNING HANDLER [container-engine/docker : restart docker] *********************
Monday 12 August 2019 01:40:35 +0100 (0:00:00.236) 0:03:19.968 *********
changed: [kube3]
changed: [kube1]
changed: [kube2]
RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************
Monday 12 August 2019 01:40:36 +0100 (0:00:01.888) 0:03:21.857 *********
changed: [kube3]
changed: [kube1]
changed: [kube2]
RUNNING HANDLER [container-engine/docker : Docker | reload docker.socket] ******
Monday 12 August 2019 01:40:38 +0100 (0:00:01.116) 0:03:22.974 *********
RUNNING HANDLER [container-engine/docker : Docker | reload docker] *************
Monday 12 August 2019 01:40:38 +0100 (0:00:00.284) 0:03:23.258 *********
changed: [kube3]
changed: [kube1]
changed: [kube2]
RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] ***
Monday 12 August 2019 01:40:42 +0100 (0:00:04.193) 0:03:27.451 *********
Pausing for 10 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
[container-engine/docker : Docker | pause while Docker restarts]
Waiting for docker restart:
ok: [kube3]
RUNNING HANDLER [container-engine/docker : Docker | wait for docker] ***********
Monday 12 August 2019 01:40:52 +0100 (0:00:10.230) 0:03:37.682 *********
changed: [kube3]
changed: [kube2]
changed: [kube1]
TASK [container-engine/docker : ensure docker service is started and enabled] ***
Monday 12 August 2019 01:40:54 +0100 (0:00:01.215) 0:03:38.898 *********
ok: [kube1] => (item=docker)
ok: [kube2] => (item=docker)
ok: [kube3] => (item=docker)
TASK [download : include_tasks] ************************************************
Monday 12 August 2019 01:40:55 +0100 (0:00:01.463) 0:03:40.362 *********
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3
TASK [download : Register docker images info] **********************************
Monday 12 August 2019 01:40:56 +0100 (0:00:00.551) 0:03:40.913 *********
ok: [kube1]
ok: [kube2]
ok: [kube3]
TASK [download : container_download | Create dest directory for saved/loaded container images] ***
Monday 12 August 2019 01:40:57 +0100 (0:00:01.129) 0:03:42.043 *********
changed: [kube1]
changed: [kube2]
changed: [kube3]
TASK [download : container_download | create local directory for saved/loaded container images] ***
Monday 12 August 2019 01:40:58 +0100 (0:00:01.017) 0:03:43.061 *********
TASK [download :
Download items] ***********************************************
Monday 12 August 2019 01:40:58 +0100 (0:00:00.117) 0:03:43.179 *********
fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\n[...same message as kube1...]"}
fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\n[...same message as kube1...]"}
[...identical 'delegate_to' failure repeated for each remaining download item on kube1, kube2 and kube3...]
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
[...identical 'delegate_to' failure repeated for kube1, kube2 and kube3 after each include...]
PLAY RECAP *********************************************************************
kube1 : ok=108 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0
kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0
kube3 : ok=95 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0
Monday 12 August 2019 01:41:01 +0100 (0:00:02.772) 0:03:45.951 *********
===============================================================================
Install packages ------------------------------------------------------- 35.97s
Wait for host to be available ------------------------------------------ 23.86s
gather facts from all instances ---------------------------------------- 16.99s
container-engine/docker : Docker | pause while Docker restarts --------- 10.23s
Persist loaded modules -------------------------------------------------- 6.04s
container-engine/docker : Docker | reload docker ------------------------ 4.19s
kubernetes/preinstall : Create kubernetes directories ------------------- 4.04s
download : Download items ----------------------------------------------- 2.77s
Load required kernel modules
-------------------------------------------- 2.60s
Extend root VG ---------------------------------------------------------- 2.60s
bootstrap-os : Gather nodes hostnames ----------------------------------- 2.54s
bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.47s
kubernetes/preinstall : Create cni directories -------------------------- 2.40s
container-engine/docker : Write docker options systemd drop-in ---------- 2.32s
container-engine/docker : Write docker dns systemd drop-in -------------- 2.17s
download : Sync container ----------------------------------------------- 2.15s
kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.14s
download : Download items ----------------------------------------------- 2.06s
kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.00s
Gathering Facts --------------------------------------------------------- 2.00s
==> kube3: An error occurred. The error will be shown after all tasks complete.
An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below.
An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again:
Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again.
Build step 'Execute shell' marked build as failure
Performing Post build task...
Could not match :Build started : False
Logical operation result is FALSE
Skipping script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0

From ci at centos.org  Mon Aug 12 00:52:58 2019
From: ci at centos.org (ci at centos.org)
Date: Mon, 12 Aug 2019 00:52:58 +0000 (UTC)
Subject: [CI-results] gluster_build-rpms - Build # 10835 - Failure!
	(master on CentOS-7/x86_64)
Message-ID: <705312214.3004.1565571179091.JavaMail.jenkins@jenkins.ci.centos.org>

gluster_build-rpms - Build # 10835 - Failure:

Check console output at https://ci.centos.org/job/gluster_build-rpms/10835/ to view the results.

From ci at centos.org  Mon Aug 12 00:53:06 2019
From: ci at centos.org (ci at centos.org)
Date: Mon, 12 Aug 2019 00:53:06 +0000 (UTC)
Subject: [CI-results] gluster_build-rpms - Build # 10838 - Failure!
	(release-4.1 on CentOS-7/x86_64)
Message-ID: <2028294964.3006.1565571187069.JavaMail.jenkins@jenkins.ci.centos.org>

gluster_build-rpms - Build # 10838 - Failure:

Check console output at https://ci.centos.org/job/gluster_build-rpms/10838/ to view the results.

From ci at centos.org  Mon Aug 12 00:55:52 2019
From: ci at centos.org (ci at centos.org)
Date: Mon, 12 Aug 2019 00:55:52 +0000 (UTC)
Subject: [CI-results] gluster_build-rpms - Build # 10840 - Failure!
	(release-5 on CentOS-7/x86_64)
Message-ID: <1031127486.3008.1565571353322.JavaMail.jenkins@jenkins.ci.centos.org>

gluster_build-rpms - Build # 10840 - Failure:

Check console output at https://ci.centos.org/job/gluster_build-rpms/10840/ to view the results.

From ci at centos.org  Mon Aug 12 00:58:02 2019
From: ci at centos.org (ci at centos.org)
Date: Mon, 12 Aug 2019 00:58:02 +0000 (UTC)
Subject: [CI-results] gluster_build-rpms - Build # 10841 - Still Failing!
	(release-6 on CentOS-7/x86_64)
In-Reply-To: <1031127486.3008.1565571353322.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1031127486.3008.1565571353322.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1209129270.3010.1565571482601.JavaMail.jenkins@jenkins.ci.centos.org>

gluster_build-rpms - Build # 10841 - Still Failing:

Check console output at https://ci.centos.org/job/gluster_build-rpms/10841/ to view the results.
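The repeated kubespray failure earlier in this digest is Ansible rejecting `delegate_to` on a dynamic task include. A minimal sketch of the failing shape and one possible workaround; the task names, variable names, and file layout below are illustrative, not taken from the actual kubespray sources:

```yaml
# Fails on Ansible releases where a TaskInclude does not accept delegate_to:
# - name: container_download | Download containers (illustrative)
#   include_tasks: download_container.yml
#   delegate_to: "{{ download_delegate }}"

# Workaround sketch: keep the include plain and move the delegation onto
# the individual tasks inside the included file instead.
- name: container_download | Download containers (illustrative)
  include_tasks: download_container.yml

# inside download_container.yml (illustrative task):
- name: container_download | Pull image (illustrative)
  command: docker pull {{ image_name }}
  delegate_to: "{{ download_delegate }}"
```

Whether `delegate_to` is accepted on an include depends on the include style and Ansible version, so pinning a compatible Ansible release is another common way out of this class of error.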
From ci at centos.org  Mon Aug 12 01:10:45 2019
From: ci at centos.org (ci at centos.org)
Date: Mon, 12 Aug 2019 01:10:45 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #282
In-Reply-To: <1211552334.2946.1565485994296.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1211552334.2946.1565485994296.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1756120321.3014.1565572245673.JavaMail.jenkins@jenkins.ci.centos.org>

See ------------------------------------------
[...truncated 57.28 KB...]
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Wait for instance(s) creation to complete] *******************************
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
--> Scenario: 'default'
--> Action: 'prepare'
PLAY [Prepare] *****************************************************************
TASK [Gathering Facts] *********************************************************
ok: [instance]
TASK [Install Dependency Packages] *********************************************
changed: [instance]
TASK [Install Dependency Packages] *********************************************
changed: [instance]
PLAY RECAP *********************************************************************
instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: author
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[...same metadata dump as above...]
[703] Should change default metadata: description
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[...same metadata dump as above...]
[703] Should change default metadata: company
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[...same metadata dump as above...]
[703] Should change default metadata: license
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[...same metadata dump as above...]
An error occurred during the test sequence action: 'lint'. Cleaning up.
--> Scenario: 'default'
--> Action: 'destroy'
PLAY [Destroy] *****************************************************************
TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
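The ansible-lint [701]/[703] findings above all point at placeholder `galaxy_info` values in the role's meta/main.yml. A sketch of the kind of metadata that would satisfy those rules; the field values here are illustrative, not the project's actual metadata:

```yaml
galaxy_info:
  author: Gluster maintainers                    # replaces the 'your name' placeholder ([703])
  description: Configure firewall rules for GlusterFS servers   # [703]
  company: example company                       # [703]; the field is optional and may be removed
  license: GPLv3                                 # [703]
  min_ansible_version: 2.5
  platforms:                                     # [701] Role info should contain platforms
    - name: EL
      versions:
        - 7
  galaxy_tags: []
dependencies: []
```

With non-default values in place, `ansible-lint` would pass this check and the molecule `lint` action would proceed to the remaining steps in the test matrix.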
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Delete docker network(s)] ************************************************
PLAY RECAP *********************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── create
    └── prepare

--> Scenario: 'default'
--> Action: 'create'
[DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
PLAY [Create] ******************************************************************
TASK [Log into a Docker registry] **********************************************
skipping: [localhost] => (item=None)
TASK [Create Dockerfiles from image names] *************************************
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Discover local Docker images] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]
TASK [Build an Ansible compatible image] ***************************************
ok: [localhost] => (item=None)
ok: [localhost]
TASK [Create docker network(s)] ************************************************
TASK [Determine the CMD directives] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]
TASK [Create molecule instance(s)] *********************************************
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Wait for instance(s) creation to complete] *******************************
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
--> Scenario: 'default'
--> Action: 'prepare'
PLAY [Prepare] *****************************************************************
TASK [Gathering Facts] *********************************************************
ok: [instance]
TASK [Install Dependency Packages] *********************************************
changed: [instance]
TASK [Install Dependency Packages] *********************************************
changed: [instance]
PLAY RECAP *********************************************************************
instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: author
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[...same metadata dump as above...]
[703] Should change default metadata: description
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[...same metadata dump as above...]
[703] Should change default metadata: company
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[...same metadata dump as above...]
[703] Should change default metadata: license
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[...same metadata dump as above...]
An error occurred during the test sequence action: 'lint'. Cleaning up.
--> Scenario: 'default'
--> Action: 'destroy'
PLAY [Destroy] *****************************************************************
TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Delete docker network(s)] ************************************************
PLAY RECAP *********************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
Build step 'Execute shell' marked build as failure
Performing Post build task...
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Tue Aug 13 00:16:53 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 13 Aug 2019 00:16:53 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #454 In-Reply-To: <2074973828.3001.1565568644376.JavaMail.jenkins@jenkins.ci.centos.org> References: <2074973828.3001.1565568644376.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <36658998.3152.1565655413726.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 39.00 KB...] Transaction test succeeded Running transaction Installing : python36-libs-3.6.8-1.el7.x86_64 1/52 Installing : python36-3.6.8-1.el7.x86_64 2/52 Installing : apr-1.4.8-3.el7_4.1.x86_64 3/52 Installing : mpfr-3.1.1-4.el7.x86_64 4/52 Installing : libmpc-1.0.1-3.el7.x86_64 5/52 Installing : apr-util-1.5.2-6.el7.x86_64 6/52 Installing : python36-six-1.11.0-3.el7.noarch 7/52 Installing : cpp-4.8.5-36.el7_6.2.x86_64 8/52 Installing : python36-idna-2.7-2.el7.noarch 9/52 Installing : python36-pysocks-1.6.8-6.el7.noarch 10/52 Installing : python36-urllib3-1.19.1-5.el7.noarch 11/52 Installing : python36-pyroute2-0.4.13-2.el7.noarch 12/52 Installing : python36-setuptools-39.2.0-3.el7.noarch 13/52 Installing : python36-chardet-2.3.0-6.el7.noarch 14/52 Installing : python36-requests-2.12.5-3.el7.noarch 15/52 Installing : python36-distro-1.2.0-3.el7.noarch 16/52 Installing : python36-markupsafe-0.23-3.el7.x86_64 17/52 Installing : python36-jinja2-2.8.1-2.el7.noarch 18/52 Installing : python36-rpm-4.11.3-4.el7.x86_64 19/52 Installing : elfutils-0.172-2.el7.x86_64 20/52 Installing : unzip-6.0-19.el7.x86_64 21/52 Installing : dwz-0.11-3.el7.x86_64 
22/52 Installing : bzip2-1.0.6-13.el7.x86_64 23/52 Installing : distribution-gpg-keys-1.32-1.el7.noarch 24/52 Installing : mock-core-configs-30.4-1.el7.noarch 25/52 Installing : usermode-1.111-5.el7.x86_64 26/52 Installing : pakchois-0.4-10.el7.x86_64 27/52 Installing : patch-2.7.1-10.el7_5.x86_64 28/52 Installing : libmodman-2.0.1-8.el7.x86_64 29/52 Installing : libproxy-0.4.11-11.el7.x86_64 30/52 Installing : gdb-7.6.1-114.el7.x86_64 31/52 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/52 Installing : perl-srpm-macros-1-8.el7.noarch 33/52 Installing : pigz-2.3.4-1.el7.x86_64 34/52 Installing : golang-src-1.11.5-1.el7.noarch 35/52 Installing : nettle-2.7.1-8.el7.x86_64 36/52 Installing : kernel-headers-3.10.0-957.27.2.el7.x86_64 37/52 Installing : glibc-headers-2.17-260.el7_6.6.x86_64 38/52 Installing : glibc-devel-2.17-260.el7_6.6.x86_64 39/52 Installing : gcc-4.8.5-36.el7_6.2.x86_64 40/52 Installing : zip-3.0-11.el7.x86_64 41/52 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 42/52 Installing : mercurial-2.6.2-8.el7_4.x86_64 43/52 Installing : trousers-0.3.14-2.el7.x86_64 44/52 Installing : gnutls-3.3.29-9.el7_6.x86_64 45/52 Installing : neon-0.30.0-3.el7.x86_64 46/52 Installing : subversion-libs-1.7.14-14.el7.x86_64 47/52 Installing : subversion-1.7.14-14.el7.x86_64 48/52 Installing : golang-1.11.5-1.el7.x86_64 49/52 Installing : golang-bin-1.11.5-1.el7.x86_64 50/52 Installing : rpm-build-4.11.3-35.el7.x86_64 51/52 Installing : mock-1.4.16-1.el7.noarch 52/52 Verifying : trousers-0.3.14-2.el7.x86_64 1/52 Verifying : python36-idna-2.7-2.el7.noarch 2/52 Verifying : rpm-build-4.11.3-35.el7.x86_64 3/52 Verifying : glibc-headers-2.17-260.el7_6.6.x86_64 4/52 Verifying : python36-pysocks-1.6.8-6.el7.noarch 5/52 Verifying : mercurial-2.6.2-8.el7_4.x86_64 6/52 Verifying : zip-3.0-11.el7.x86_64 7/52 Verifying : python36-3.6.8-1.el7.x86_64 8/52 Verifying : subversion-libs-1.7.14-14.el7.x86_64 9/52 Verifying : python36-urllib3-1.19.1-5.el7.noarch 10/52 
Verifying : kernel-headers-3.10.0-957.27.2.el7.x86_64 11/52 Verifying : nettle-2.7.1-8.el7.x86_64 12/52 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 13/52 Verifying : golang-src-1.11.5-1.el7.noarch 14/52 Verifying : python36-pyroute2-0.4.13-2.el7.noarch 15/52 Verifying : pigz-2.3.4-1.el7.x86_64 16/52 Verifying : perl-srpm-macros-1-8.el7.noarch 17/52 Verifying : golang-1.11.5-1.el7.x86_64 18/52 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 19/52 Verifying : golang-bin-1.11.5-1.el7.x86_64 20/52 Verifying : gdb-7.6.1-114.el7.x86_64 21/52 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 22/52 Verifying : gnutls-3.3.29-9.el7_6.x86_64 23/52 Verifying : mock-1.4.16-1.el7.noarch 24/52 Verifying : libmodman-2.0.1-8.el7.x86_64 25/52 Verifying : python36-setuptools-39.2.0-3.el7.noarch 26/52 Verifying : mpfr-3.1.1-4.el7.x86_64 27/52 Verifying : python36-six-1.11.0-3.el7.noarch 28/52 Verifying : apr-util-1.5.2-6.el7.x86_64 29/52 Verifying : python36-chardet-2.3.0-6.el7.noarch 30/52 Verifying : patch-2.7.1-10.el7_5.x86_64 31/52 Verifying : libmpc-1.0.1-3.el7.x86_64 32/52 Verifying : pakchois-0.4-10.el7.x86_64 33/52 Verifying : neon-0.30.0-3.el7.x86_64 34/52 Verifying : usermode-1.111-5.el7.x86_64 35/52 Verifying : apr-1.4.8-3.el7_4.1.x86_64 36/52 Verifying : libproxy-0.4.11-11.el7.x86_64 37/52 Verifying : mock-core-configs-30.4-1.el7.noarch 38/52 Verifying : distribution-gpg-keys-1.32-1.el7.noarch 39/52 Verifying : glibc-devel-2.17-260.el7_6.6.x86_64 40/52 Verifying : bzip2-1.0.6-13.el7.x86_64 41/52 Verifying : subversion-1.7.14-14.el7.x86_64 42/52 Verifying : python36-distro-1.2.0-3.el7.noarch 43/52 Verifying : dwz-0.11-3.el7.x86_64 44/52 Verifying : unzip-6.0-19.el7.x86_64 45/52 Verifying : python36-markupsafe-0.23-3.el7.x86_64 46/52 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 47/52 Verifying : python36-requests-2.12.5-3.el7.noarch 48/52 Verifying : python36-jinja2-2.8.1-2.el7.noarch 49/52 Verifying : python36-libs-3.6.8-1.el7.x86_64 50/52 Verifying : 
elfutils-0.172-2.el7.x86_64 51/52 Verifying : python36-rpm-4.11.3-4.el7.x86_64 52/52 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.16-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.32-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.6 glibc-headers.x86_64 0:2.17-260.el7_6.6 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.27.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.4-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python36.x86_64 0:3.6.8-1.el7 python36-chardet.noarch 0:2.3.0-6.el7 python36-distro.noarch 0:1.2.0-3.el7 python36-idna.noarch 0:2.7-2.el7 python36-jinja2.noarch 0:2.8.1-2.el7 python36-libs.x86_64 0:3.6.8-1.el7 python36-markupsafe.x86_64 0:0.23-3.el7 python36-pyroute2.noarch 0:0.4.13-2.el7 python36-pysocks.noarch 0:1.6.8-6.el7 python36-requests.noarch 0:2.12.5-3.el7 python36-rpm.x86_64 0:4.11.3-4.el7 python36-setuptools.noarch 0:39.2.0-3.el7 python36-six.noarch 0:1.11.0-3.el7 python36-urllib3.noarch 0:1.19.1-5.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0
[curl download progress omitted]
Installing gometalinter. Version: 2.0.5
[curl download progress omitted]
Installing etcd. Version: v3.3.9
[curl download progress omitted]
~/nightlyrpmwudTjG/go/src/github.com/gluster/glusterd2 ~
Installing vendored packages
Creating dist archive /root/nightlyrpmwudTjG/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz
Created dist archive /root/nightlyrpmwudTjG/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz
~ ~/nightlyrpmwudTjG ~
INFO: mock.py version 1.4.16 starting (python version = 3.6.8)...
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmwudTjG/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.16 INFO: Mock Version: 1.4.16 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmwudTjG/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 25 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 680e1b1c3d9c4e79868164795251bc76 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.vliuyuhz:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True Logical operation result is TRUE Running script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
  cico -q node done $ssid
done
[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins3336490784639703460.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done 2a563e9b
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname   | ip_address  | chassis | used_count | current_state | comment  | distro | rel  | centos_version | architecture | node_pool | console_port | flavor |
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 140     | n13.crusty | 172.19.2.13 | crusty  | 3894       | Deployed      | 2a563e9b | None   | None | 7              | x86_64       | 1         | 2120         | None   |
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0
From ci at centos.org Tue Aug 13 00:40:57 2019
From: ci at centos.org (ci at centos.org)
Date: Tue, 13 Aug 2019 00:40:57 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #258
In-Reply-To: <1359260898.3002.1565570461542.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1359260898.3002.1565570461542.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <122379718.3153.1565656858026.JavaMail.jenkins@jenkins.ci.centos.org>
See ------------------------------------------
[...truncated 289.04 KB...]
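A runnable sketch of the cico-node-done-from-ansible.sh post-build loop that appears in the log above. The `cico` client (the CentOS CI node broker CLI) is stubbed here so the loop logic can be exercised without CI credentials; the stub and the sample SSID value are illustrative, not part of the real script.

```shell
# Stub of the real `cico` client so the loop can run anywhere;
# the real invocation form, as in the log, is `cico -q node done <ssid>`.
cico() { echo "released ${4}"; }

# Default to the per-build SSID file that Jenkins writes into the workspace.
WORKSPACE=${WORKSPACE:-$(mktemp -d)}
echo "2a563e9b" > "${WORKSPACE}/cico-ssid"     # sample SSID for the demo
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

# Release every node recorded for this build.
for ssid in $(cat "${SSID_FILE}")
do
  cico -q node done "$ssid"                    # prints: released 2a563e9b
done
```

The `${VAR:-default}` expansion is what lets the CI override the SSID file location per job while falling back to the workspace copy.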
TASK [container-engine/docker : check number of search domains] **************** Tuesday 13 August 2019 01:40:14 +0100 (0:00:00.309) 0:03:03.864 ******** TASK [container-engine/docker : check length of search domains] **************** Tuesday 13 August 2019 01:40:14 +0100 (0:00:00.303) 0:03:04.168 ******** TASK [container-engine/docker : check for minimum kernel version] ************** Tuesday 13 August 2019 01:40:15 +0100 (0:00:00.309) 0:03:04.477 ******** TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Tuesday 13 August 2019 01:40:15 +0100 (0:00:00.284) 0:03:04.762 ******** TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] *** Tuesday 13 August 2019 01:40:16 +0100 (0:00:00.602) 0:03:05.364 ******** TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Tuesday 13 August 2019 01:40:17 +0100 (0:00:01.327) 0:03:06.692 ******** TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Tuesday 13 August 2019 01:40:17 +0100 (0:00:00.282) 0:03:06.974 ******** TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Tuesday 13 August 2019 01:40:18 +0100 (0:00:00.262) 0:03:07.237 ******** TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Tuesday 13 August 2019 01:40:18 +0100 (0:00:00.312) 0:03:07.549 ******** TASK [container-engine/docker : Configure docker repository on Fedora] ********* Tuesday 13 August 2019 01:40:18 +0100 (0:00:00.308) 0:03:07.858 ******** TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Tuesday 13 August 2019 01:40:19 +0100 (0:00:00.400) 0:03:08.259 ******** TASK [container-engine/docker : Copy yum.conf for editing] ********************* Tuesday 13 August 2019 01:40:19 +0100 (0:00:00.302) 0:03:08.561 ******** TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Tuesday 13 August 
2019 01:40:19 +0100 (0:00:00.282) 0:03:08.843 ******** TASK [container-engine/docker : ensure docker packages are installed] ********** Tuesday 13 August 2019 01:40:19 +0100 (0:00:00.293) 0:03:09.137 ******** TASK [container-engine/docker : Ensure docker packages are installed] ********** Tuesday 13 August 2019 01:40:20 +0100 (0:00:00.386) 0:03:09.523 ******** TASK [container-engine/docker : get available packages on Ubuntu] ************** Tuesday 13 August 2019 01:40:20 +0100 (0:00:00.357) 0:03:09.880 ******** TASK [container-engine/docker : show available packages on ubuntu] ************* Tuesday 13 August 2019 01:40:20 +0100 (0:00:00.278) 0:03:10.158 ******** TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Tuesday 13 August 2019 01:40:21 +0100 (0:00:00.275) 0:03:10.434 ******** TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Tuesday 13 August 2019 01:40:21 +0100 (0:00:00.296) 0:03:10.731 ******** ok: [kube1] ok: [kube2] ok: [kube3] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Tuesday 13 August 2019 01:40:23 +0100 (0:00:01.994) 0:03:12.726 ******** ok: [kube1] ok: [kube2] ok: [kube3] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Tuesday 13 August 2019 01:40:24 +0100 (0:00:00.995) 0:03:13.721 ******** TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Tuesday 13 August 2019 01:40:24 +0100 (0:00:00.311) 0:03:14.033 ******** changed: [kube1] changed: [kube3] changed: [kube2] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Tuesday 13 August 2019 01:40:25 +0100 (0:00:01.041) 0:03:15.074 ******** TASK [container-engine/docker : get systemd version] *************************** Tuesday 13 August 2019 01:40:26 +0100 (0:00:00.305) 0:03:15.380 ******** TASK [container-engine/docker : Write docker.service systemd file] ************* Tuesday 13 August 2019 01:40:26 +0100 (0:00:00.308) 0:03:15.689 ******** TASK [container-engine/docker : Write docker options systemd drop-in] ********** Tuesday 13 August 2019 01:40:26 +0100 (0:00:00.305) 0:03:15.995 ******** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Tuesday 13 August 2019 01:40:29 +0100 (0:00:02.323) 0:03:18.318 ******** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Tuesday 13 August 2019 01:40:31 +0100 (0:00:02.291) 0:03:20.610 ******** TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Tuesday 13 August 2019 01:40:31 +0100 (0:00:00.321) 0:03:20.931 ******** RUNNING HANDLER [container-engine/docker : restart docker] ********************* Tuesday 13 August 2019 01:40:31 +0100 (0:00:00.240) 0:03:21.171 ******** changed: [kube1] changed: [kube3] changed: [kube2] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Tuesday 13 August 2019 01:40:33 +0100 (0:00:01.903) 0:03:23.075 ******** changed: [kube1] changed: [kube3] changed: [kube2] RUNNING HANDLER 
[container-engine/docker : Docker | reload docker.socket] ****** Tuesday 13 August 2019 01:40:34 +0100 (0:00:01.101) 0:03:24.177 ******** RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Tuesday 13 August 2019 01:40:35 +0100 (0:00:00.288) 0:03:24.465 ******** changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Tuesday 13 August 2019 01:40:39 +0100 (0:00:04.075) 0:03:28.541 ******** Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube1] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Tuesday 13 August 2019 01:40:49 +0100 (0:00:10.221) 0:03:38.762 ******** changed: [kube1] changed: [kube3] changed: [kube2] TASK [container-engine/docker : ensure docker service is started and enabled] *** Tuesday 13 August 2019 01:40:50 +0100 (0:00:01.234) 0:03:39.997 ******** ok: [kube2] => (item=docker) ok: [kube1] => (item=docker) ok: [kube3] => (item=docker) TASK [download : include_tasks] ************************************************ Tuesday 13 August 2019 01:40:52 +0100 (0:00:01.414) 0:03:41.411 ******** included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Tuesday 13 August 2019 01:40:52 +0100 (0:00:00.566) 0:03:41.977 ******** ok: [kube1] ok: [kube2] ok: [kube3] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Tuesday 13 August 2019 01:40:53 +0100 (0:00:00.980) 0:03:42.958 ******** changed: [kube1] changed: [kube2] changed: [kube3] TASK [download : container_download | create local directory for saved/loaded container images] *** Tuesday 13 August 2019 01:40:54 +0100 (0:00:00.908) 0:03:43.867 ******** TASK [download : 
Download items] *********************************************** Tuesday 13 August 2019 01:40:54 +0100 (0:00:00.108) 0:03:43.975 ******** fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} PLAY RECAP ********************************************************************* kube1 : ok=109 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0 kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0 kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0 Tuesday 13 August 2019 01:40:57 +0100 (0:00:02.780) 0:03:46.756 ******** =============================================================================== Install packages ------------------------------------------------------- 37.50s Wait for host to be available ------------------------------------------ 21.50s gather facts from all instances ---------------------------------------- 16.82s container-engine/docker : Docker | pause while Docker restarts --------- 10.22s Persist loaded modules -------------------------------------------------- 6.06s kubernetes/preinstall : Create kubernetes directories ------------------- 4.22s container-engine/docker : Docker | reload docker ------------------------ 4.08s download : Download items ----------------------------------------------- 2.78s bootstrap-os : Assign 
inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.72s Load required kernel modules -------------------------------------------- 2.64s bootstrap-os : Gather nodes hostnames ----------------------------------- 2.64s kubernetes/preinstall : Create cni directories -------------------------- 2.53s Extend root VG ---------------------------------------------------------- 2.40s container-engine/docker : Write docker options systemd drop-in ---------- 2.32s kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.32s container-engine/docker : Write docker dns systemd drop-in -------------- 2.29s download : Sync container ----------------------------------------------- 2.21s kubernetes/preinstall : Set selinux policy ------------------------------ 2.09s Gathering Facts --------------------------------------------------------- 2.08s download : Download items ----------------------------------------------- 2.05s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
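The repeated failure above is Ansible rejecting `delegate_to` as an attribute of a task-include statement. A minimal hedged sketch of the usual remedy, not the actual kubespray patch: since Ansible 2.8, an `include_tasks` statement cannot carry `delegate_to` directly, but it can forward task keywords to the included tasks through `apply`. The variable name `download_delegate` is assumed for illustration.

```yaml
# Hypothetical illustration only -- not kubespray's fix.
# delegate_to on the include line itself raises
# "'delegate_to' is not a valid attribute for a TaskInclude";
# forwarding it via apply delegates every included task instead.
- name: container_download | include with delegation applied
  include_tasks:
    file: download_container.yml
    apply:
      delegate_to: "{{ download_delegate }}"
```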
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Tue Aug 13 01:23:58 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 13 Aug 2019 01:23:58 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #283 In-Reply-To: <1756120321.3014.1565572245673.JavaMail.jenkins@jenkins.ci.centos.org> References: <1756120321.3014.1565572245673.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <662886835.3158.1565659438202.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 57.23 KB...] changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. 
Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... [701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], 
u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── create └── prepare --> Scenario: 'default' --> Action: 'create' [DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg. PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] 
******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', 
u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
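The ansible-lint findings above ([701], [703]) all point at placeholder values left in the role's meta/main.yml. A hedged sketch of a filled-in galaxy_info that would satisfy those rules; every value below is an assumption chosen for illustration, not the project's actual metadata.

```yaml
# Illustrative only -- author/description/company/license are placeholder
# assumptions; the platforms list addresses lint rule [701].
galaxy_info:
  author: Gluster Ansible maintainers
  description: Configure firewalld rules for GlusterFS storage nodes
  company: Gluster community
  license: GPLv3
  min_ansible_version: 2.5
  platforms:
    - name: EL
      versions:
        - 7
  galaxy_tags: []
dependencies: []
```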
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Wed Aug 14 00:17:57 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 14 Aug 2019 00:17:57 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #455 In-Reply-To: <36658998.3152.1565655413726.JavaMail.jenkins@jenkins.ci.centos.org> References: <36658998.3152.1565655413726.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1722429470.3234.1565741877050.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 40.30 KB...] Transaction test succeeded Running transaction Installing : python36-libs-3.6.8-1.el7.x86_64 1/52 Installing : python36-3.6.8-1.el7.x86_64 2/52 Installing : apr-1.4.8-3.el7_4.1.x86_64 3/52 Installing : mpfr-3.1.1-4.el7.x86_64 4/52 Installing : libmpc-1.0.1-3.el7.x86_64 5/52 Installing : apr-util-1.5.2-6.el7.x86_64 6/52 Installing : python36-six-1.11.0-3.el7.noarch 7/52 Installing : cpp-4.8.5-36.el7_6.2.x86_64 8/52 Installing : python36-idna-2.7-2.el7.noarch 9/52 Installing : python36-pysocks-1.6.8-6.el7.noarch 10/52 Installing : python36-urllib3-1.19.1-5.el7.noarch 11/52 Installing : python36-pyroute2-0.4.13-2.el7.noarch 12/52 Installing : python36-setuptools-39.2.0-3.el7.noarch 13/52 Installing : python36-chardet-2.3.0-6.el7.noarch 14/52 Installing : python36-requests-2.12.5-3.el7.noarch 15/52 Installing : python36-distro-1.2.0-3.el7.noarch 16/52 Installing : python36-markupsafe-0.23-3.el7.x86_64 17/52 Installing : python36-jinja2-2.8.1-2.el7.noarch 18/52 Installing : python36-rpm-4.11.3-4.el7.x86_64 19/52 Installing : elfutils-0.172-2.el7.x86_64 20/52 Installing : unzip-6.0-19.el7.x86_64 21/52 Installing : dwz-0.11-3.el7.x86_64 
22/52 Installing : bzip2-1.0.6-13.el7.x86_64 23/52 Installing : distribution-gpg-keys-1.32-1.el7.noarch 24/52 Installing : mock-core-configs-30.4-1.el7.noarch 25/52 Installing : usermode-1.111-5.el7.x86_64 26/52 Installing : pakchois-0.4-10.el7.x86_64 27/52 Installing : patch-2.7.1-10.el7_5.x86_64 28/52 Installing : libmodman-2.0.1-8.el7.x86_64 29/52 Installing : libproxy-0.4.11-11.el7.x86_64 30/52 Installing : gdb-7.6.1-114.el7.x86_64 31/52 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/52 Installing : perl-srpm-macros-1-8.el7.noarch 33/52 Installing : pigz-2.3.4-1.el7.x86_64 34/52 Installing : golang-src-1.11.5-1.el7.noarch 35/52 Installing : nettle-2.7.1-8.el7.x86_64 36/52 Installing : kernel-headers-3.10.0-957.27.2.el7.x86_64 37/52 Installing : glibc-headers-2.17-260.el7_6.6.x86_64 38/52 Installing : glibc-devel-2.17-260.el7_6.6.x86_64 39/52 Installing : gcc-4.8.5-36.el7_6.2.x86_64 40/52 Installing : zip-3.0-11.el7.x86_64 41/52 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 42/52 Installing : mercurial-2.6.2-8.el7_4.x86_64 43/52 Installing : trousers-0.3.14-2.el7.x86_64 44/52 Installing : gnutls-3.3.29-9.el7_6.x86_64 45/52 Installing : neon-0.30.0-3.el7.x86_64 46/52 Installing : subversion-libs-1.7.14-14.el7.x86_64 47/52 Installing : subversion-1.7.14-14.el7.x86_64 48/52 Installing : golang-1.11.5-1.el7.x86_64 49/52 Installing : golang-bin-1.11.5-1.el7.x86_64 50/52 Installing : rpm-build-4.11.3-35.el7.x86_64 51/52 Installing : mock-1.4.16-1.el7.noarch 52/52 Verifying : trousers-0.3.14-2.el7.x86_64 1/52 Verifying : python36-idna-2.7-2.el7.noarch 2/52 Verifying : rpm-build-4.11.3-35.el7.x86_64 3/52 Verifying : glibc-headers-2.17-260.el7_6.6.x86_64 4/52 Verifying : python36-pysocks-1.6.8-6.el7.noarch 5/52 Verifying : mercurial-2.6.2-8.el7_4.x86_64 6/52 Verifying : zip-3.0-11.el7.x86_64 7/52 Verifying : python36-3.6.8-1.el7.x86_64 8/52 Verifying : subversion-libs-1.7.14-14.el7.x86_64 9/52 Verifying : python36-urllib3-1.19.1-5.el7.noarch 10/52 
Verifying : kernel-headers-3.10.0-957.27.2.el7.x86_64 11/52 Verifying : nettle-2.7.1-8.el7.x86_64 12/52 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 13/52 Verifying : golang-src-1.11.5-1.el7.noarch 14/52 Verifying : python36-pyroute2-0.4.13-2.el7.noarch 15/52 Verifying : pigz-2.3.4-1.el7.x86_64 16/52 Verifying : perl-srpm-macros-1-8.el7.noarch 17/52 Verifying : golang-1.11.5-1.el7.x86_64 18/52 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 19/52 Verifying : golang-bin-1.11.5-1.el7.x86_64 20/52 Verifying : gdb-7.6.1-114.el7.x86_64 21/52 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 22/52 Verifying : gnutls-3.3.29-9.el7_6.x86_64 23/52 Verifying : mock-1.4.16-1.el7.noarch 24/52 Verifying : libmodman-2.0.1-8.el7.x86_64 25/52 Verifying : python36-setuptools-39.2.0-3.el7.noarch 26/52 Verifying : mpfr-3.1.1-4.el7.x86_64 27/52 Verifying : python36-six-1.11.0-3.el7.noarch 28/52 Verifying : apr-util-1.5.2-6.el7.x86_64 29/52 Verifying : python36-chardet-2.3.0-6.el7.noarch 30/52 Verifying : patch-2.7.1-10.el7_5.x86_64 31/52 Verifying : libmpc-1.0.1-3.el7.x86_64 32/52 Verifying : pakchois-0.4-10.el7.x86_64 33/52 Verifying : neon-0.30.0-3.el7.x86_64 34/52 Verifying : usermode-1.111-5.el7.x86_64 35/52 Verifying : apr-1.4.8-3.el7_4.1.x86_64 36/52 Verifying : libproxy-0.4.11-11.el7.x86_64 37/52 Verifying : mock-core-configs-30.4-1.el7.noarch 38/52 Verifying : distribution-gpg-keys-1.32-1.el7.noarch 39/52 Verifying : glibc-devel-2.17-260.el7_6.6.x86_64 40/52 Verifying : bzip2-1.0.6-13.el7.x86_64 41/52 Verifying : subversion-1.7.14-14.el7.x86_64 42/52 Verifying : python36-distro-1.2.0-3.el7.noarch 43/52 Verifying : dwz-0.11-3.el7.x86_64 44/52 Verifying : unzip-6.0-19.el7.x86_64 45/52 Verifying : python36-markupsafe-0.23-3.el7.x86_64 46/52 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 47/52 Verifying : python36-requests-2.12.5-3.el7.noarch 48/52 Verifying : python36-jinja2-2.8.1-2.el7.noarch 49/52 Verifying : python36-libs-3.6.8-1.el7.x86_64 50/52 Verifying : 
elfutils-0.172-2.el7.x86_64 51/52 Verifying : python36-rpm-4.11.3-4.el7.x86_64 52/52 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.16-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.32-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.6 glibc-headers.x86_64 0:2.17-260.el7_6.6 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.27.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.4-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python36.x86_64 0:3.6.8-1.el7 python36-chardet.noarch 0:2.3.0-6.el7 python36-distro.noarch 0:1.2.0-3.el7 python36-idna.noarch 0:2.7-2.el7 python36-jinja2.noarch 0:2.8.1-2.el7 python36-libs.x86_64 0:3.6.8-1.el7 python36-markupsafe.x86_64 0:0.23-3.el7 python36-pyroute2.noarch 0:0.4.13-2.el7 python36-pysocks.noarch 0:1.6.8-6.el7 python36-requests.noarch 0:2.12.5-3.el7 python36-rpm.x86_64 0:4.11.3-4.el7 python36-setuptools.noarch 0:39.2.0-3.el7 python36-six.noarch 0:1.11.0-3.el7 python36-urllib3.noarch 0:1.19.1-5.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1934 0 --:--:-- --:--:-- --:--:-- 1945 100 8513k 100 8513k 0 0 13.2M 0 --:--:-- --:--:-- --:--:-- 13.2M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2084 0 --:--:-- --:--:-- --:--:-- 2083 100 38.3M 100 38.3M 0 0 45.8M 0 --:--:-- --:--:-- --:--:-- 45.8M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 542 0 --:--:-- --:--:-- --:--:-- 544 0 0 0 620 0 0 1737 0 --:--:-- --:--:-- --:--:-- 1737 100 10.7M 100 10.7M 0 0 16.2M 0 --:--:-- --:--:-- --:--:-- 16.2M ~/nightlyrpmSBZ1Wt/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmSBZ1Wt/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmSBZ1Wt/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmSBZ1Wt ~ INFO: mock.py version 1.4.16 starting (python version = 3.6.8)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmSBZ1Wt/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.16 INFO: Mock Version: 1.4.16 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmSBZ1Wt/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 3 minutes 10 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M a6f3787f8bed4b6e8325c32f8d7980b3 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.5i1sqof3:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins3617751587552137798.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done 6aa9d978 +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 155 | n28.crusty | 172.19.2.28 | crusty | 3898 | Deployed | 6aa9d978 | None | None | 7 | x86_64 | 1 | 2270 | None | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Wed Aug 14 00:40:56 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 14 Aug 2019 00:40:56 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #259 In-Reply-To: <122379718.3153.1565656858026.JavaMail.jenkins@jenkins.ci.centos.org> References: <122379718.3153.1565656858026.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1616209834.3235.1565743256883.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 289.02 KB...] 
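The post-build step above releases Duffy nodes by looping over a SSID file. A minimal sketch of the same loop wrapped in a function with a guard for a missing file; `release_nodes` is a hypothetical helper name, and the `cico` client is assumed to be on PATH.

```shell
#!/bin/sh
# Hedged sketch of the cico-node-done-from-ansible.sh loop.
# release_nodes is a hypothetical wrapper; cico is the CentOS CI
# Duffy client, assumed installed.
release_nodes() {
    ssid_file="${1:-${WORKSPACE}/cico-ssid}"
    # Nothing to release if the SSID file was never written.
    [ -r "$ssid_file" ] || { echo "no SSID file at $ssid_file" >&2; return 0; }
    while IFS= read -r ssid; do
        # Skip blank lines; release each recorded session id.
        [ -n "$ssid" ] && cico -q node done "$ssid"
    done < "$ssid_file"
}
```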
TASK [container-engine/docker : check number of search domains] ****************
Wednesday 14 August 2019  01:40:13 +0100 (0:00:00.308)       0:03:04.913 ******

TASK [container-engine/docker : check length of search domains] ****************
Wednesday 14 August 2019  01:40:14 +0100 (0:00:00.307)       0:03:05.220 ******

TASK [container-engine/docker : check for minimum kernel version] **************
Wednesday 14 August 2019  01:40:14 +0100 (0:00:00.368)       0:03:05.589 ******

TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] ***
Wednesday 14 August 2019  01:40:14 +0100 (0:00:00.298)       0:03:05.888 ******

TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] ***
Wednesday 14 August 2019  01:40:15 +0100 (0:00:00.606)       0:03:06.494 ******

TASK [container-engine/docker : ensure docker-ce repository public key is installed] ***
Wednesday 14 August 2019  01:40:16 +0100 (0:00:01.413)       0:03:07.908 ******

TASK [container-engine/docker : ensure docker-ce repository is enabled] ********
Wednesday 14 August 2019  01:40:17 +0100 (0:00:00.278)       0:03:08.186 ******

TASK [container-engine/docker : ensure docker-engine repository public key is installed] ***
Wednesday 14 August 2019  01:40:17 +0100 (0:00:00.253)       0:03:08.440 ******

TASK [container-engine/docker : ensure docker-engine repository is enabled] ****
Wednesday 14 August 2019  01:40:17 +0100 (0:00:00.312)       0:03:08.753 ******

TASK [container-engine/docker : Configure docker repository on Fedora] *********
Wednesday 14 August 2019  01:40:17 +0100 (0:00:00.323)       0:03:09.076 ******

TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] ***
Wednesday 14 August 2019  01:40:18 +0100 (0:00:00.282)       0:03:09.359 ******

TASK [container-engine/docker : Copy yum.conf for editing] *********************
Wednesday 14 August 2019  01:40:18 +0100 (0:00:00.284)       0:03:09.643 ******

TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ******
Wednesday 14 August 2019  01:40:18 +0100 (0:00:00.347)       0:03:09.990 ******

TASK [container-engine/docker : ensure docker packages are installed] **********
Wednesday 14 August 2019  01:40:19 +0100 (0:00:00.292)       0:03:10.283 ******

TASK [container-engine/docker : Ensure docker packages are installed] **********
Wednesday 14 August 2019  01:40:19 +0100 (0:00:00.379)       0:03:10.662 ******

TASK [container-engine/docker : get available packages on Ubuntu] **************
Wednesday 14 August 2019  01:40:19 +0100 (0:00:00.339)       0:03:11.002 ******

TASK [container-engine/docker : show available packages on ubuntu] *************
Wednesday 14 August 2019  01:40:20 +0100 (0:00:00.278)       0:03:11.281 ******

TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] ***
Wednesday 14 August 2019  01:40:20 +0100 (0:00:00.284)       0:03:11.566 ******

TASK [container-engine/docker : ensure service is started if docker packages are already present] ***
Wednesday 14 August 2019  01:40:20 +0100 (0:00:00.294)       0:03:11.860 ******
ok: [kube3]
ok: [kube1]
ok: [kube2]
 [WARNING]: flush_handlers task does not support when conditional

TASK [container-engine/docker : set fact for docker_version] *******************
Wednesday 14 August 2019  01:40:22 +0100 (0:00:02.051)       0:03:13.912 ******
ok: [kube1]
ok: [kube2]
ok: [kube3]

TASK [container-engine/docker : check minimum docker version for docker_dns mode. You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] ***
Wednesday 14 August 2019  01:40:23 +0100 (0:00:01.115)       0:03:15.027 ******

TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] ***
Wednesday 14 August 2019  01:40:24 +0100 (0:00:00.296)       0:03:15.323 ******
changed: [kube1]
changed: [kube3]
changed: [kube2]

TASK [container-engine/docker : Write docker proxy drop-in] ********************
Wednesday 14 August 2019  01:40:25 +0100 (0:00:00.914)       0:03:16.238 ******

TASK [container-engine/docker : get systemd version] ***************************
Wednesday 14 August 2019  01:40:25 +0100 (0:00:00.318)       0:03:16.556 ******

TASK [container-engine/docker : Write docker.service systemd file] *************
Wednesday 14 August 2019  01:40:25 +0100 (0:00:00.329)       0:03:16.886 ******

TASK [container-engine/docker : Write docker options systemd drop-in] **********
Wednesday 14 August 2019  01:40:26 +0100 (0:00:00.330)       0:03:17.217 ******
changed: [kube1]
changed: [kube2]
changed: [kube3]

TASK [container-engine/docker : Write docker dns systemd drop-in] **************
Wednesday 14 August 2019  01:40:28 +0100 (0:00:02.288)       0:03:19.506 ******
changed: [kube1]
changed: [kube3]
changed: [kube2]

TASK [container-engine/docker : Copy docker orphan clean up script to the node] ***
Wednesday 14 August 2019  01:40:30 +0100 (0:00:02.213)       0:03:21.719 ******

TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] ***
Wednesday 14 August 2019  01:40:30 +0100 (0:00:00.306)       0:03:22.026 ******

RUNNING HANDLER [container-engine/docker : restart docker] *********************
Wednesday 14 August 2019  01:40:31 +0100 (0:00:00.237)       0:03:22.263 ******
changed: [kube1]
changed: [kube2]
changed: [kube3]

RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************
Wednesday 14 August 2019  01:40:33 +0100 (0:00:01.977)       0:03:24.240 ******
changed: [kube1]
changed: [kube2]
changed: [kube3]

RUNNING HANDLER [container-engine/docker : Docker | reload docker.socket] ******
Wednesday 14 August 2019  01:40:34 +0100 (0:00:01.049)       0:03:25.290 ******

RUNNING HANDLER [container-engine/docker : Docker | reload docker] *************
Wednesday 14 August 2019  01:40:34 +0100 (0:00:00.319)       0:03:25.610 ******
changed: [kube2]
changed: [kube1]
changed: [kube3]

RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] ***
Wednesday 14 August 2019  01:40:38 +0100 (0:00:03.967)       0:03:29.577 ******
Pausing for 10 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
[container-engine/docker : Docker | pause while Docker restarts]
Waiting for docker restart:
ok: [kube1]

RUNNING HANDLER [container-engine/docker : Docker | wait for docker] ***********
Wednesday 14 August 2019  01:40:48 +0100 (0:00:10.188)       0:03:39.765 ******
changed: [kube1]
changed: [kube2]
changed: [kube3]

TASK [container-engine/docker : ensure docker service is started and enabled] ***
Wednesday 14 August 2019  01:40:49 +0100 (0:00:01.295)       0:03:41.061 ******
ok: [kube1] => (item=docker)
ok: [kube3] => (item=docker)
ok: [kube2] => (item=docker)

TASK [download : include_tasks] ************************************************
Wednesday 14 August 2019  01:40:51 +0100 (0:00:01.198)       0:03:42.259 ******
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3

TASK [download : Register docker images info] **********************************
Wednesday 14 August 2019  01:40:51 +0100 (0:00:00.519)       0:03:42.779 ******
ok: [kube1]
ok: [kube3]
ok: [kube2]

TASK [download : container_download | Create dest directory for saved/loaded container images] ***
Wednesday 14 August 2019  01:40:52 +0100 (0:00:01.075)       0:03:43.854 ******
changed: [kube1]
changed: [kube2]
changed: [kube3]

TASK [download : container_download | create local directory for saved/loaded container images] ***
Wednesday 14 August 2019  01:40:53 +0100 (0:00:00.934)       0:03:44.789 ******

TASK [download : Download items] ***********************************************
Wednesday 14 August 2019  01:40:53 +0100 (0:00:00.137)       0:03:44.927 ******
fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
fatal: [kube2]: FAILED! => [...same 'delegate_to' error as kube1...]
fatal: [kube3]: FAILED! => [...same 'delegate_to' error as kube1...]
[...the identical TaskInclude fatal repeats for kube1, kube2 and kube3 on each remaining download include (failed=10 per host in the recap below), interleaved with three further includes:...]
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3

PLAY RECAP *********************************************************************
kube1 : ok=109 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0
kube2 : ok=96  changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0
kube3 : ok=94  changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0

Wednesday 14 August 2019  01:40:56 +0100 (0:00:02.748)       0:03:47.675 ******
===============================================================================
Install packages ------------------------------------------------------- 35.67s
Wait for host to be available ------------------------------------------ 23.96s
gather facts from all instances ---------------------------------------- 17.49s
container-engine/docker : Docker | pause while Docker restarts --------- 10.19s
Persist loaded modules -------------------------------------------------- 6.03s
kubernetes/preinstall : Create kubernetes directories ------------------- 4.16s
container-engine/docker : Docker | reload docker ------------------------ 3.97s
download : Download items ----------------------------------------------- 2.75s
Load required kernel modules -------------------------------------------- 2.68s
bootstrap-os : Gather nodes hostnames ----------------------------------- 2.62s
kubernetes/preinstall : Create cni directories -------------------------- 2.49s
bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.48s
Extend root VG ---------------------------------------------------------- 2.37s
container-engine/docker : Write docker options systemd drop-in ---------- 2.29s
container-engine/docker : Write docker dns systemd drop-in -------------- 2.21s
download : Sync container ----------------------------------------------- 2.15s
kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.13s
kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.12s
download : Download items ----------------------------------------------- 2.12s
container-engine/docker : ensure service is started if docker packages are already present --- 2.05s

==> kube3: An error occurred. The error will be shown after all tasks complete.
An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below.
An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again:
Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again.
Build step 'Execute shell' marked build as failure
Performing Post build task...
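The repeated failure above is Ansible (2.8 and later) rejecting `delegate_to` on a dynamic task include: an `include_tasks` entry is expanded by the controller rather than executed on a host, so per-host delegation is not a valid keyword there, and older kubespray download roles tripped over exactly this. A minimal sketch of the failing shape and the usual rework (task and file names here are illustrative, not the actual kubespray source):

```yaml
# Broken sketch: delegate_to attached to a TaskInclude is rejected with
# "'delegate_to' is not a valid attribute for a TaskInclude"
- name: container_download | sync container (broken)
  include_tasks: sync_container.yml
  delegate_to: localhost

# Reworked sketch: keep the include plain and put delegate_to on the real
# tasks inside sync_container.yml (or use import_tasks), so delegation
# applies to tasks that actually execute on a host.
- name: container_download | sync container (reworked)
  include_tasks: sync_container.yml
```

Pinning kubespray to a release that matches the installed Ansible version is the other common remedy; the archive itself does not record which fix was applied.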
Could not match :Build started  : False
Logical operation result is FALSE
Skipping script  :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

END OF POST BUILD TASK : 0

From ci at centos.org  Wed Aug 14 01:26:36 2019
From: ci at centos.org (ci at centos.org)
Date: Wed, 14 Aug 2019 01:26:36 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #284
In-Reply-To: <662886835.3158.1565659438202.JavaMail.jenkins@jenkins.ci.centos.org>
References: <662886835.3158.1565659438202.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1099610372.3239.1565745996126.JavaMail.jenkins@jenkins.ci.centos.org>

See
------------------------------------------
[...truncated 58.59 KB...]
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) creation to complete] *******************************
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

PLAY RECAP *********************************************************************
localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0

--> Scenario: 'default'
--> Action: 'prepare'

PLAY [Prepare] *****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

PLAY RECAP *********************************************************************
instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: author
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[...same meta/main.yml dump as above...]
[703] Should change default metadata: description
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[...same meta/main.yml dump as above...]
[703] Should change default metadata: company
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[...same meta/main.yml dump as above...]
[703] Should change default metadata: license
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[...same meta/main.yml dump as above...]
An error occurred during the test sequence action: 'lint'. Cleaning up.

--> Scenario: 'default'
--> Action: 'destroy'

PLAY [Destroy] *****************************************************************

TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Delete docker network(s)] ************************************************

PLAY RECAP *********************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0

./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory

--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── create
    └── prepare

--> Scenario: 'default'
--> Action: 'create'
[DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.

PLAY [Create] ******************************************************************

TASK [Log into a Docker registry] **********************************************
skipping: [localhost] => (item=None)

TASK [Create Dockerfiles from image names] *************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Discover local Docker images] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Build an Ansible compatible image] ***************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Create docker network(s)] ************************************************

TASK [Determine the CMD directives] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Create molecule instance(s)] *********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) creation to complete] *******************************
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

PLAY RECAP *********************************************************************
localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0

--> Scenario: 'default'
--> Action: 'prepare'

PLAY [Prepare] *****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

PLAY RECAP *********************************************************************
instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[...the same [701] and [703] ansible-lint findings as above, repeated verbatim...]
An error occurred during the test sequence action: 'lint'. Cleaning up.

--> Scenario: 'default'
--> Action: 'destroy'

PLAY [Destroy] *****************************************************************

TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Delete docker network(s)] ************************************************

PLAY RECAP *********************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0

Build step 'Execute shell' marked build as failure
Performing Post build task...
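The ansible-lint findings above all point at placeholder role metadata: [701] wants a `platforms` list, and the [703] entries flag the unchanged galaxy defaults (`your name`, `your description`, and so on) in `roles/firewall_config/meta/main.yml`. A filled-in `meta/main.yml` of roughly the following shape would satisfy them; the values here are illustrative, not the project's actual metadata:

```yaml
galaxy_info:
  author: Gluster Ansible maintainers        # replaces "your name"
  description: Configure firewall rules for GlusterFS nodes   # replaces "your description"
  license: GPLv3                             # replaces "license (GPLv2, CC-BY, etc)"
  min_ansible_version: 1.2
  platforms:                                 # addresses [701] "Role info should contain platforms"
    - name: EL
      versions:
        - 7
  galaxy_tags: []
dependencies: []
```

The `company` key is optional and can simply be removed rather than filled in.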
Could not match :Build started  : False
Logical operation result is FALSE
Skipping script  :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by

SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

END OF POST BUILD TASK : 0

From ci at centos.org  Thu Aug 15 00:13:17 2019
From: ci at centos.org (ci at centos.org)
Date: Thu, 15 Aug 2019 00:13:17 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #456
In-Reply-To: <1722429470.3234.1565741877050.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1722429470.3234.1565741877050.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <288982576.3475.1565827997896.JavaMail.jenkins@jenkins.ci.centos.org>

See 
------------------------------------------
[...truncated 52.20 KB...]
http://mirror.prgmr.com/pub/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
https://mirrors.sonic.net/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
http://mirror.sjc02.svwh.net/fedora-epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
http://mirror.sfo12.us.leaseweb.net/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
https://sjc.edge.kernel.org/fedora-buffet/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
http://mirrors.syringanetworks.net/fedora-epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. https://mirrors.cat.pdx.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. https://dl.fedoraproject.org/pub/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. https://mirrors.xmission.com/fedora-epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. https://ord.mirror.rackspace.com/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. http://fedora.westmancom.com/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. https://dfw.mirror.rackspace.com/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. http://fedora-epel.mirrors.tds.net/fedora-epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. http://mirrors.kernel.org/fedora-epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. 
https://d2lzkl7pfhq30w.cloudfront.net/pub/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. https://mirror.chpc.utah.edu/pub/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. http://mirror.ci.centos.org/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. http://mirror.es.its.nyu.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. http://csc.mcs.sdsmt.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 12] Timeout on http://csc.mcs.sdsmt.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: (28, 'Connection timed out after 30001 milliseconds') Trying other mirror. http://mirror.us.leaseweb.net/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. http://mirror.cogentco.com/pub/linux/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. https://mirror.umd.edu/fedora/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. http://fedora.mirrors.pair.com/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. 
https://mirror.vcu.edu/pub/gnu%2Blinux/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. https://epel.mirror.constant.com/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. http://mirror.cs.princeton.edu/pub/mirrors/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. http://mirror.math.princeton.edu/pub/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. https://iad.mirror.rackspace.com/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. http://mirror.cs.pitt.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. https://download-ib01.fedoraproject.org/pub/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. https://ewr.edge.kernel.org/fedora-buffet/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. http://mirrors.mit.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. 
http://mirror.metrocast.net/fedora/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. https://mirror.steadfastnet.com/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. http://mirror.uic.edu/EPEL/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. http://ftp.cse.buffalo.edu/pub/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. http://mirror.grid.uchicago.edu/pub/linux/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. https://pubmirror1.math.uh.edu/fedora-buffet/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. https://pubmirror2.math.uh.edu/fedora-buffet/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. https://mirror.us-midwest-1.nexcess.net/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. http://mirror.coastal.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. 
https://mirror.csclub.uwaterloo.ca/fedora/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. http://mirror.rnet.missouri.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. https://mirror.mrjester.net/fedora/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. http://mirror.seas.harvard.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. http://mirror.compevo.com/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. https://mirrors.lug.mtu.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. https://mirror.dal.nexril.net/fedora-epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. http://mirror.its.dal.ca/pub/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. http://mirror.nodesdirect.com/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. https://mirror.colorado.edu/fedora/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. 
http://mirrors.liquidweb.com/fedora-epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. http://mirror.twinlakes.net/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. http://mirror.prgmr.com/pub/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. https://mirrors.sonic.net/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. http://mirror.sjc02.svwh.net/fedora-epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. http://mirror.sfo12.us.leaseweb.net/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. https://sjc.edge.kernel.org/fedora-buffet/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. http://mirrors.syringanetworks.net/fedora-epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. https://mirrors.cat.pdx.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. 
https://dl.fedoraproject.org/pub/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. https://mirrors.xmission.com/fedora-epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. https://ord.mirror.rackspace.com/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. http://fedora.westmancom.com/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. https://dfw.mirror.rackspace.com/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. http://fedora-epel.mirrors.tds.net/fedora-epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. http://mirrors.kernel.org/fedora-epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. https://d2lzkl7pfhq30w.cloudfront.net/pub/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. https://mirror.chpc.utah.edu/pub/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. One of the configured repositories failed (Extra Packages for Enterprise Linux 7 - x86_64), and yum doesn't have enough cached data to continue. 
At this point the only safe thing yum can do is fail. There are a few ways to work "fix" this:

     1. Contact the upstream for the repository and get them to fix the problem.

     2. Reconfigure the baseurl/etc. for the repository, to point to a working
        upstream. This is most often useful if you are using a newer
        distribution release than is supported by the repository (and the
        packages for the previous distribution release still work).

     3. Run the command with the repository temporarily disabled
            yum --disablerepo=epel ...

     4. Disable the repository permanently, so yum won't use it by default. Yum
        will then just ignore the repository until you permanently enable it
        again or use --enablerepo for temporary usage:

            yum-config-manager --disable epel
        or
            subscription-manager repos --disable=epel

     5. Configure the failing repository to be skipped, if it is unavailable.
        Note that yum will try to contact the repo. when it runs most commands,
        so will have to try and fail each time (and thus. yum will be be much
        slower). If it is a very temporary problem though, this is often a nice
        compromise:

            yum-config-manager --save --setopt=epel.skip_if_unavailable=true

failure: repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2 from epel: [Errno 256] No more mirrors to try.
http://mirror.ci.centos.org/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found http://mirror.es.its.nyu.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found http://csc.mcs.sdsmt.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 12] Timeout on http://csc.mcs.sdsmt.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: (28, 'Connection timed out after 30001 milliseconds') http://mirror.us.leaseweb.net/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found http://mirror.cogentco.com/pub/linux/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found https://mirror.umd.edu/fedora/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found http://fedora.mirrors.pair.com/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found https://mirror.vcu.edu/pub/gnu+linux/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found https://epel.mirror.constant.com/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found http://mirror.cs.princeton.edu/pub/mirrors/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found 
http://mirror.math.princeton.edu/pub/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found https://iad.mirror.rackspace.com/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found http://mirror.cs.pitt.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found https://download-ib01.fedoraproject.org/pub/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found https://ewr.edge.kernel.org/fedora-buffet/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found http://mirrors.mit.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found http://mirror.metrocast.net/fedora/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found https://mirror.steadfastnet.com/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found http://mirror.uic.edu/EPEL/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found http://ftp.cse.buffalo.edu/pub/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found http://mirror.grid.uchicago.edu/pub/linux/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found 
https://pubmirror1.math.uh.edu/fedora-buffet/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found https://pubmirror2.math.uh.edu/fedora-buffet/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found https://mirror.us-midwest-1.nexcess.net/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found http://mirror.coastal.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found https://mirror.csclub.uwaterloo.ca/fedora/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found http://mirror.rnet.missouri.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found https://mirror.mrjester.net/fedora/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found http://mirror.seas.harvard.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found http://mirror.compevo.com/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found https://mirrors.lug.mtu.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found https://mirror.dal.nexril.net/fedora-epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found 
http://mirror.its.dal.ca/pub/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found http://mirror.nodesdirect.com/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found https://mirror.colorado.edu/fedora/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found http://mirrors.liquidweb.com/fedora-epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found http://mirror.twinlakes.net/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found http://mirror.prgmr.com/pub/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found https://mirrors.sonic.net/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found http://mirror.sjc02.svwh.net/fedora-epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found http://mirror.sfo12.us.leaseweb.net/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found https://sjc.edge.kernel.org/fedora-buffet/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found http://mirrors.syringanetworks.net/fedora-epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found 
https://mirrors.cat.pdx.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found https://dl.fedoraproject.org/pub/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found https://mirrors.xmission.com/fedora-epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found https://ord.mirror.rackspace.com/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found http://fedora.westmancom.com/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found https://dfw.mirror.rackspace.com/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found http://fedora-epel.mirrors.tds.net/fedora-epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found http://mirrors.kernel.org/fedora-epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found https://d2lzkl7pfhq30w.cloudfront.net/pub/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found https://mirror.chpc.utah.edu/pub/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Build step 'Execute shell' marked build as failure Performing Post build task... 
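The numbered suggestions yum prints in the failure above reduce to a few one-liners. A sketch, assuming (as in this log) the failing repository id is `epel` and `yum-config-manager` (from yum-utils) is available; note a 404 on a repodata file is often just stale cached metadata, so flushing the cache is worth trying first:

```shell
# Often the real fix for a repodata 404: drop the stale metadata and refetch
yum clean metadata --disablerepo='*' --enablerepo=epel

# Suggestion 3: one-off run with the broken repo disabled
yum --disablerepo=epel install mock

# Suggestion 4: disable the repo until its mirrors are consistent again
yum-config-manager --disable epel

# Suggestion 5: keep it enabled but skip it when unavailable
yum-config-manager --save --setopt=epel.skip_if_unavailable=true
```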
Match found for :Building remotely : True
Logical operation result is TRUE
Running script  :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by

SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins5755874157596443902.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done 908e3836
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname  | ip_address   | chassis | used_count | current_state | comment  | distro | rel  | centos_version | architecture | node_pool | console_port | flavor |
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 113     | n49.pufty | 172.19.3.113 | pufty   | 3902       | Deployed      | 908e3836 | None   | None | 7              | x86_64       | 1         | 2480         | None   |
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0

From ci at centos.org  Thu Aug 15 00:40:59 2019
From: ci at centos.org (ci at centos.org)
Date: Thu, 15 Aug 2019 00:40:59 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #260
In-Reply-To: <1616209834.3235.1565743256883.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1616209834.3235.1565743256883.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <456876286.3480.1565829659480.JavaMail.jenkins@jenkins.ci.centos.org>

See 
------------------------------------------
[...truncated 289.03 KB...]
TASK [container-engine/docker : check number of search domains] **************** Thursday 15 August 2019 01:40:15 +0100 (0:00:00.291) 0:03:01.973 ******* TASK [container-engine/docker : check length of search domains] **************** Thursday 15 August 2019 01:40:15 +0100 (0:00:00.297) 0:03:02.270 ******* TASK [container-engine/docker : check for minimum kernel version] ************** Thursday 15 August 2019 01:40:16 +0100 (0:00:00.302) 0:03:02.573 ******* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Thursday 15 August 2019 01:40:16 +0100 (0:00:00.299) 0:03:02.873 ******* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] *** Thursday 15 August 2019 01:40:17 +0100 (0:00:00.657) 0:03:03.530 ******* TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Thursday 15 August 2019 01:40:18 +0100 (0:00:01.368) 0:03:04.899 ******* TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Thursday 15 August 2019 01:40:18 +0100 (0:00:00.262) 0:03:05.162 ******* TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Thursday 15 August 2019 01:40:19 +0100 (0:00:00.257) 0:03:05.420 ******* TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Thursday 15 August 2019 01:40:19 +0100 (0:00:00.312) 0:03:05.732 ******* TASK [container-engine/docker : Configure docker repository on Fedora] ********* Thursday 15 August 2019 01:40:19 +0100 (0:00:00.306) 0:03:06.039 ******* TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Thursday 15 August 2019 01:40:19 +0100 (0:00:00.280) 0:03:06.320 ******* TASK [container-engine/docker : Copy yum.conf for editing] ********************* Thursday 15 August 2019 01:40:20 +0100 (0:00:00.298) 0:03:06.618 ******* TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Thursday 15 August 
2019 01:40:20 +0100 (0:00:00.299)       0:03:06.918 *******
TASK [container-engine/docker : ensure docker packages are installed] **********
Thursday 15 August 2019  01:40:20 +0100 (0:00:00.297)       0:03:07.215 *******
TASK [container-engine/docker : Ensure docker packages are installed] **********
Thursday 15 August 2019  01:40:21 +0100 (0:00:00.363)       0:03:07.579 *******
TASK [container-engine/docker : get available packages on Ubuntu] **************
Thursday 15 August 2019  01:40:21 +0100 (0:00:00.341)       0:03:07.921 *******
TASK [container-engine/docker : show available packages on ubuntu] *************
Thursday 15 August 2019  01:40:21 +0100 (0:00:00.293)       0:03:08.214 *******
TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] ***
Thursday 15 August 2019  01:40:22 +0100 (0:00:00.275)       0:03:08.490 *******
TASK [container-engine/docker : ensure service is started if docker packages are already present] ***
Thursday 15 August 2019  01:40:22 +0100 (0:00:00.289)       0:03:08.779 *******
ok: [kube3]
ok: [kube1]
ok: [kube2]
[WARNING]: flush_handlers task does not support when conditional
TASK [container-engine/docker : set fact for docker_version] *******************
Thursday 15 August 2019  01:40:24 +0100 (0:00:02.026)       0:03:10.806 *******
ok: [kube2]
ok: [kube1]
ok: [kube3]
TASK [container-engine/docker : check minimum docker version for docker_dns mode. You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] ***
Thursday 15 August 2019  01:40:25 +0100 (0:00:01.119)       0:03:11.925 *******
TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] ***
Thursday 15 August 2019  01:40:25 +0100 (0:00:00.301)       0:03:12.227 *******
changed: [kube1]
changed: [kube3]
changed: [kube2]
TASK [container-engine/docker : Write docker proxy drop-in] ********************
Thursday 15 August 2019  01:40:26 +0100 (0:00:01.088)       0:03:13.315 *******
TASK [container-engine/docker : get systemd version] ***************************
Thursday 15 August 2019  01:40:27 +0100 (0:00:00.339)       0:03:13.655 *******
TASK [container-engine/docker : Write docker.service systemd file] *************
Thursday 15 August 2019  01:40:27 +0100 (0:00:00.379)       0:03:14.034 *******
TASK [container-engine/docker : Write docker options systemd drop-in] **********
Thursday 15 August 2019  01:40:27 +0100 (0:00:00.311)       0:03:14.345 *******
changed: [kube1]
changed: [kube2]
changed: [kube3]
TASK [container-engine/docker : Write docker dns systemd drop-in] **************
Thursday 15 August 2019  01:40:30 +0100 (0:00:02.393)       0:03:16.739 *******
changed: [kube1]
changed: [kube2]
changed: [kube3]
TASK [container-engine/docker : Copy docker orphan clean up script to the node] ***
Thursday 15 August 2019  01:40:32 +0100 (0:00:02.159)       0:03:18.898 *******
TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] ***
Thursday 15 August 2019  01:40:32 +0100 (0:00:00.357)       0:03:19.256 *******
RUNNING HANDLER [container-engine/docker : restart docker] *********************
Thursday 15 August 2019  01:40:33 +0100 (0:00:00.272)       0:03:19.528 *******
changed: [kube1]
changed: [kube3]
changed: [kube2]
RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************
Thursday 15 August 2019  01:40:35 +0100 (0:00:01.965)       0:03:21.494 *******
changed: [kube3]
changed: [kube1]
changed: [kube2]
RUNNING HANDLER [container-engine/docker : Docker | reload docker.socket] ******
Thursday 15 August 2019  01:40:36 +0100 (0:00:01.251)       0:03:22.745 *******
RUNNING HANDLER [container-engine/docker : Docker | reload docker] *************
Thursday 15 August 2019  01:40:36 +0100 (0:00:00.281)       0:03:23.027 *******
changed: [kube1]
changed: [kube3]
changed: [kube2]
RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] ***
Thursday 15 August 2019  01:40:40 +0100 (0:00:04.111)       0:03:27.138 *******
Pausing for 10 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
[container-engine/docker : Docker | pause while Docker restarts]
Waiting for docker restart:
ok: [kube1]
RUNNING HANDLER [container-engine/docker : Docker | wait for docker] ***********
Thursday 15 August 2019  01:40:50 +0100 (0:00:10.215)       0:03:37.354 *******
changed: [kube1]
changed: [kube2]
changed: [kube3]
TASK [container-engine/docker : ensure docker service is started and enabled] ***
Thursday 15 August 2019  01:40:52 +0100 (0:00:01.308)       0:03:38.663 *******
ok: [kube1] => (item=docker)
ok: [kube2] => (item=docker)
ok: [kube3] => (item=docker)
TASK [download : include_tasks] ************************************************
Thursday 15 August 2019  01:40:53 +0100 (0:00:00.504)       0:03:39.828 *******
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3
TASK [download : Register docker images info] **********************************
Thursday 15 August 2019  01:40:53 +0100 (0:00:01.042)       0:03:40.332 *******
ok: [kube1]
ok: [kube2]
ok: [kube3]
TASK [download : container_download | Create dest directory for saved/loaded container images] ***
Thursday 15 August 2019  01:40:54 +0100 (0:00:01.114)       0:03:41.374 *******
changed: [kube1]
changed: [kube2]
changed: [kube3]
TASK [download : container_download | create local directory for saved/loaded container images] ***
Thursday 15 August 2019  01:40:56 +0100 (0:00:00.133)       0:03:42.489 *******
TASK [download : Download items] ***********************************************
Thursday 15 August 2019  01:40:56 +0100 (0:00:02.756)       0:03:42.622 *******
fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n  ^ here\n"}
fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n  ^ here\n"}
fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n  ^ here\n"}
[...the identical 'delegate_to' failure repeated for kube1, kube3 and kube2 on each remaining download item; duplicate messages trimmed...]
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube3, kube2
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube3, kube2
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube3, kube2
PLAY RECAP *********************************************************************
kube1                      : ok=109  changed=22   unreachable=0    failed=10   skipped=116  rescued=0    ignored=0
kube2                      : ok=96   changed=22   unreachable=0    failed=10   skipped=111  rescued=0    ignored=0
kube3                      : ok=94   changed=22   unreachable=0    failed=10   skipped=113  rescued=0    ignored=0
Thursday 15 August 2019  01:40:58 +0100 (0:00:02.756)       0:03:45.379 *******
===============================================================================
Install packages ------------------------------------------------------- 35.40s
Wait for host to be available ------------------------------------------ 21.32s
gather facts from all instances ---------------------------------------- 17.31s
container-engine/docker : Docker | pause while Docker restarts --------- 10.22s
Persist loaded modules -------------------------------------------------- 6.25s
container-engine/docker : Docker | reload docker ------------------------ 4.11s
kubernetes/preinstall : Create kubernetes directories ------------------- 3.91s
download : Download items ----------------------------------------------- 2.76s
bootstrap-os : Assign
inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.67s
Load required kernel modules -------------------------------------------- 2.61s
bootstrap-os : Gather nodes hostnames ----------------------------------- 2.52s
Extend root VG ---------------------------------------------------------- 2.48s
container-engine/docker : Write docker options systemd drop-in ---------- 2.39s
kubernetes/preinstall : Create cni directories -------------------------- 2.39s
kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.31s
Gathering Facts --------------------------------------------------------- 2.19s
container-engine/docker : Write docker dns systemd drop-in -------------- 2.16s
download : Sync container ----------------------------------------------- 2.14s
kubernetes/preinstall : Set selinux policy ------------------------------ 2.07s
kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.06s
==> kube3: An error occurred. The error will be shown after all tasks complete.
An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below.
An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again:
Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again.
Build step 'Execute shell' marked build as failure
Performing Post build task...
Could not match :Build started  : False
Logical operation result is FALSE
Skipping script  :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0

From ci at centos.org  Thu Aug 15 00:54:58 2019
From: ci at centos.org (ci at centos.org)
Date: Thu, 15 Aug 2019 00:54:58 +0000 (UTC)
Subject: [CI-results] gluster_build-rpms - Build # 10859 - Failure!
	(master on CentOS-7/x86_64)
Message-ID: <962449923.3485.1565830498419.JavaMail.jenkins@jenkins.ci.centos.org>

gluster_build-rpms - Build # 10859 - Failure:

Check console output at https://ci.centos.org/job/gluster_build-rpms/10859/ to view the results.

From ci at centos.org  Thu Aug 15 00:55:11 2019
From: ci at centos.org (ci at centos.org)
Date: Thu, 15 Aug 2019 00:55:11 +0000 (UTC)
Subject: [CI-results] gluster_build-rpms - Build # 10862 - Failure!
	(release-4.1 on CentOS-7/x86_64)
Message-ID: <1732397674.3487.1565830512103.JavaMail.jenkins@jenkins.ci.centos.org>

gluster_build-rpms - Build # 10862 - Failure:

Check console output at https://ci.centos.org/job/gluster_build-rpms/10862/ to view the results.

From ci at centos.org  Thu Aug 15 00:56:44 2019
From: ci at centos.org (ci at centos.org)
Date: Thu, 15 Aug 2019 00:56:44 +0000 (UTC)
Subject: [CI-results] gluster_build-rpms - Build # 10861 - Failure!
	(release-4.1 on CentOS-6/x86_64)
Message-ID: <610181272.3489.1565830604568.JavaMail.jenkins@jenkins.ci.centos.org>

gluster_build-rpms - Build # 10861 - Failure:

Check console output at https://ci.centos.org/job/gluster_build-rpms/10861/ to view the results.

From ci at centos.org  Thu Aug 15 00:57:08 2019
From: ci at centos.org (ci at centos.org)
Date: Thu, 15 Aug 2019 00:57:08 +0000 (UTC)
Subject: [CI-results] gluster_build-rpms - Build # 10860 - Still Failing!
	(master on CentOS-6/x86_64)
In-Reply-To: <962449923.3485.1565830498419.JavaMail.jenkins@jenkins.ci.centos.org>
References: <962449923.3485.1565830498419.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <2058168189.3492.1565830628982.JavaMail.jenkins@jenkins.ci.centos.org>

gluster_build-rpms - Build # 10860 - Still Failing:

Check console output at https://ci.centos.org/job/gluster_build-rpms/10860/ to view the results.
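[Editorial note on the gluster_gd2-nightly failure above: the repeated message "'delegate_to' is not a valid attribute for a TaskInclude" is what Ansible 2.8 reports when `delegate_to` is placed directly on a dynamic `include_tasks`, which that release no longer accepts. A minimal sketch of the usual repair follows, assuming the role carries the keyword on the include itself; the task name and the `download_delegate` variable are illustrative, not the actual kubespray source. Since Ansible 2.7, task keywords can instead be forwarded to the included tasks through `apply`:]

```yaml
# Failing shape (hypothetical reconstruction, rejected by Ansible 2.8):
#
# - name: container_download | Download containers if pull is required
#   include_tasks: download_container.yml
#   delegate_to: "{{ download_delegate }}"
#
# One way to express the same intent with a dynamic include is to pass
# the keyword through 'apply', which applies it to the included tasks:
- name: container_download | Download containers if pull is required
  include_tasks: download_container.yml
  apply:
    delegate_to: "{{ download_delegate }}"
```

[A static `import_tasks` is another option when the file name is fixed, since imported tasks inherit keywords from the importing task.]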
From ci at centos.org  Thu Aug 15 00:59:44 2019
From: ci at centos.org (ci at centos.org)
Date: Thu, 15 Aug 2019 00:59:44 +0000 (UTC)
Subject: [CI-results] gluster_build-rpms - Build # 10864 - Failure!
	(release-5 on CentOS-7/x86_64)
Message-ID: <1097457412.3496.1565830785075.JavaMail.jenkins@jenkins.ci.centos.org>

gluster_build-rpms - Build # 10864 - Failure:

Check console output at https://ci.centos.org/job/gluster_build-rpms/10864/ to view the results.

From ci at centos.org  Thu Aug 15 01:01:34 2019
From: ci at centos.org (ci at centos.org)
Date: Thu, 15 Aug 2019 01:01:34 +0000 (UTC)
Subject: [CI-results] gluster_build-rpms - Build # 10863 - Still Failing!
	(release-5 on CentOS-6/x86_64)
In-Reply-To: <1732397674.3487.1565830512103.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1732397674.3487.1565830512103.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <778997796.3498.1565830894499.JavaMail.jenkins@jenkins.ci.centos.org>

gluster_build-rpms - Build # 10863 - Still Failing:

Check console output at https://ci.centos.org/job/gluster_build-rpms/10863/ to view the results.

From ci at centos.org  Thu Aug 15 01:01:52 2019
From: ci at centos.org (ci at centos.org)
Date: Thu, 15 Aug 2019 01:01:52 +0000 (UTC)
Subject: [CI-results] gluster_build-rpms - Build # 10865 - Still Failing!
	(release-6 on CentOS-7/x86_64)
In-Reply-To: <1097457412.3496.1565830785075.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1097457412.3496.1565830785075.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1322417003.3500.1565830912520.JavaMail.jenkins@jenkins.ci.centos.org>

gluster_build-rpms - Build # 10865 - Still Failing:

Check console output at https://ci.centos.org/job/gluster_build-rpms/10865/ to view the results.
From ci at centos.org  Thu Aug 15 01:03:04 2019
From: ci at centos.org (ci at centos.org)
Date: Thu, 15 Aug 2019 01:03:04 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #285
In-Reply-To: <1099610372.3239.1565745996126.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1099610372.3239.1565745996126.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1142480107.3501.1565830984524.JavaMail.jenkins@jenkins.ci.centos.org>

See
------------------------------------------
[...truncated 59.61 KB...]
https://mirror.csclub.uwaterloo.ca/fedora/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
https://mirror.umd.edu/fedora/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
https://mirror.steadfastnet.com/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
https://pubmirror2.math.uh.edu/fedora-buffet/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
https://pubmirror1.math.uh.edu/fedora-buffet/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
https://mirror.mrjester.net/fedora/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
http://mirror.coastal.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
https://mirrors.lug.mtu.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
http://mirror.twinlakes.net/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
http://mirror.rnet.missouri.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
http://mirror.compevo.com/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
https://mirror.dal.nexril.net/fedora-epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
https://dfw.mirror.rackspace.com/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
https://mirror.colorado.edu/fedora/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
https://mirror.chpc.utah.edu/pub/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
http://mirror.its.dal.ca/pub/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
http://mirrors.liquidweb.com/fedora-epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
http://mirrors.syringanetworks.net/fedora-epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
http://mirror.nodesdirect.com/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
https://dl.fedoraproject.org/pub/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
http://mirror.prgmr.com/pub/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
https://mirrors.sonic.net/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
http://mirrors.kernel.org/fedora-epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
http://mirror.sjc02.svwh.net/fedora-epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
http://mirror.sfo12.us.leaseweb.net/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
https://mirrors.cat.pdx.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
https://mirrors.xmission.com/fedora-epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
https://sjc.edge.kernel.org/fedora-buffet/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
http://fedora.westmancom.com/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
http://mirror.us.leaseweb.net/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.


 One of the configured repositories failed (Extra Packages for Enterprise Linux 7 - x86_64),
 and yum doesn't have enough cached data to continue. At this point the only
 safe thing yum can do is fail. There are a few ways to work "fix" this:

     1. Contact the upstream for the repository and get them to fix the problem.

     2. Reconfigure the baseurl/etc. for the repository, to point to a working
        upstream. This is most often useful if you are using a newer
        distribution release than is supported by the repository (and the
        packages for the previous distribution release still work).

     3. Run the command with the repository temporarily disabled
            yum --disablerepo=epel ...

     4. Disable the repository permanently, so yum won't use it by default. Yum
        will then just ignore the repository until you permanently enable it
        again or use --enablerepo for temporary usage:

            yum-config-manager --disable epel
        or
            subscription-manager repos --disable=epel

     5. Configure the failing repository to be skipped, if it is unavailable.
        Note that yum will try to contact the repo. when it runs most commands,
        so will have to try and fail each time (and thus yum will be much
        slower). If it is a very temporary problem though, this is often a nice
        compromise:

            yum-config-manager --save --setopt=epel.skip_if_unavailable=true

failure: repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2 from epel: [Errno 256] No more mirrors to try.
http://mirror.ci.centos.org/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
http://mirror.es.its.nyu.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
http://csc.mcs.sdsmt.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 12] Timeout on http://csc.mcs.sdsmt.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: (28, 'Connection timed out after 30001 milliseconds')
https://d2lzkl7pfhq30w.cloudfront.net/pub/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
https://iad.mirror.rackspace.com/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
http://mirror.cogentco.com/pub/linux/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
https://mirror.vcu.edu/pub/gnu+linux/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
http://mirror.cs.pitt.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
http://mirror.cs.princeton.edu/pub/mirrors/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
http://mirror.math.princeton.edu/pub/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
http://fedora.mirrors.pair.com/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
https://epel.mirror.constant.com/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
http://mirrors.mit.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
http://fedora-epel.mirrors.tds.net/fedora-epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
http://mirror.seas.harvard.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
https://ewr.edge.kernel.org/fedora-buffet/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
http://reflector.westga.edu/repos/Fedora-EPEL/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
https://mirror.us-midwest-1.nexcess.net/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
https://download-ib01.fedoraproject.org/pub/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
http://mirror.metrocast.net/fedora/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
http://ftp.cse.buffalo.edu/pub/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
http://mirror.uic.edu/EPEL/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
http://mirror.grid.uchicago.edu/pub/linux/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
https://ord.mirror.rackspace.com/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
https://mirror.csclub.uwaterloo.ca/fedora/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
https://mirror.umd.edu/fedora/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
https://mirror.steadfastnet.com/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
https://pubmirror2.math.uh.edu/fedora-buffet/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
https://pubmirror1.math.uh.edu/fedora-buffet/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
https://mirror.mrjester.net/fedora/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
http://mirror.coastal.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
https://mirrors.lug.mtu.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
http://mirror.twinlakes.net/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
http://mirror.rnet.missouri.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
http://mirror.compevo.com/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
https://mirror.dal.nexril.net/fedora-epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
https://dfw.mirror.rackspace.com/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
https://mirror.colorado.edu/fedora/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
https://mirror.chpc.utah.edu/pub/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
http://mirror.its.dal.ca/pub/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
http://mirrors.liquidweb.com/fedora-epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
http://mirrors.syringanetworks.net/fedora-epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found http://mirror.nodesdirect.com/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found https://dl.fedoraproject.org/pub/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found http://mirror.prgmr.com/pub/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found https://mirrors.sonic.net/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found http://mirrors.kernel.org/fedora-epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found http://mirror.sjc02.svwh.net/fedora-epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found http://mirror.sfo12.us.leaseweb.net/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found https://mirrors.cat.pdx.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found https://mirrors.xmission.com/fedora-epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found https://sjc.edge.kernel.org/fedora-buffet/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found 
http://fedora.westmancom.com/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
http://mirror.us.leaseweb.net/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
./gluster-ansible-infra/tests/run-centos-ci.sh: line 8: virtualenv: command not found
./gluster-ansible-infra/tests/run-centos-ci.sh: line 9: env/bin/activate: No such file or directory
./gluster-ansible-infra/tests/run-centos-ci.sh: line 12: pip: command not found
Loaded plugins: fastestmirror
adding repo from: https://download.docker.com/linux/centos/docker-ce.repo
grabbing file https://download.docker.com/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror4.ci.centos.org
 * epel: mirror.ci.centos.org
 * extras: mirror4.ci.centos.org
 * updates: mirror4.ci.centos.org
http://mirror.ci.centos.org/epel/7/x86_64/repodata/f89cb839b944f60269a83a692363c765118abb44625a2b61cdbfb37252003fcc-updateinfo.xml.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
To address this issue please refer to the below wiki article

https://wiki.centos.org/yum-errors

If above article doesn't help to resolve this issue please use https://bugs.centos.org/.
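The `command not found` errors above (virtualenv and pip here, molecule further down) let the job limp along and fail step by step rather than up front. A minimal sketch of a fail-fast prerequisite guard such a script could add; `require` is a hypothetical helper, not part of run-centos-ci.sh:

```shell
#!/bin/sh
# Hypothetical guard (not in the actual run-centos-ci.sh): verify every tool
# the script needs before doing any work, instead of cascading
# "command not found" errors for virtualenv, pip and molecule.
require() {
    missing=0
    for cmd in "$@"; do
        if ! command -v "$cmd" >/dev/null 2>&1; then
            echo "ERROR: required command '$cmd' not found in PATH" >&2
            missing=1
        fi
    done
    return $missing
}

# The tools the log shows were absent on this builder:
require virtualenv pip molecule || echo "install prerequisites before running tests" >&2
```

Checking with `command -v` keeps the sketch POSIX-sh portable, so it also works on a minimal CentOS chroot without bash.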
Resolving Dependencies
--> Running transaction check
---> Package docker-ce.x86_64 3:19.03.1-3.el7 will be installed
--> Processing Dependency: container-selinux >= 2:2.74 for package: 3:docker-ce-19.03.1-3.el7.x86_64
--> Processing Dependency: containerd.io >= 1.2.2-3 for package: 3:docker-ce-19.03.1-3.el7.x86_64
--> Processing Dependency: docker-ce-cli for package: 3:docker-ce-19.03.1-3.el7.x86_64
--> Running transaction check
---> Package container-selinux.noarch 2:2.107-1.el7_6 will be installed
---> Package containerd.io.x86_64 0:1.2.6-3.3.el7 will be installed
---> Package docker-ce-cli.x86_64 1:19.03.1-3.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package            Arch    Version          Repository        Size
================================================================================
Installing:
 docker-ce          x86_64  3:19.03.1-3.el7  docker-ce-stable  24 M
Installing for dependencies:
 container-selinux  noarch  2:2.107-1.el7_6  extras            39 k
 containerd.io      x86_64  1.2.6-3.3.el7    docker-ce-stable  26 M
 docker-ce-cli      x86_64  1:19.03.1-3.el7  docker-ce-stable  39 M

Transaction Summary
================================================================================
Install  1 Package (+3 Dependent packages)

Total download size: 90 M
Installed size: 368 M
Downloading packages:
warning: /var/cache/yum/x86_64/7/docker-ce-stable/packages/containerd.io-1.2.6-3.3.el7.x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID 621e9f35: NOKEY
Public key for containerd.io-1.2.6-3.3.el7.x86_64.rpm is not installed
--------------------------------------------------------------------------------
Total                                              63 MB/s |  90 MB  00:01
Retrieving key from https://download.docker.com/linux/centos/gpg
Importing GPG key 0x621E9F35:
 Userid     : "Docker Release (CE rpm) "
 Fingerprint: 060a 61c5 1b55 8a7f 742b 77aa c52f eb6b 621e 9f35
 From       : https://download.docker.com/linux/centos/gpg
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : 2:container-selinux-2.107-1.el7_6.noarch  1/4
  Installing : containerd.io-1.2.6-3.3.el7.x86_64        2/4
  Installing : 1:docker-ce-cli-19.03.1-3.el7.x86_64      3/4
  Installing : 3:docker-ce-19.03.1-3.el7.x86_64          4/4
  Verifying  : 1:docker-ce-cli-19.03.1-3.el7.x86_64      1/4
  Verifying  : 3:docker-ce-19.03.1-3.el7.x86_64          2/4
  Verifying  : containerd.io-1.2.6-3.3.el7.x86_64        3/4
  Verifying  : 2:container-selinux-2.107-1.el7_6.noarch  4/4

Installed:
  docker-ce.x86_64 3:19.03.1-3.el7

Dependency Installed:
  container-selinux.noarch 2:2.107-1.el7_6
  containerd.io.x86_64 0:1.2.6-3.3.el7
  docker-ce-cli.x86_64 1:19.03.1-3.el7

Complete!
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
./gluster-ansible-infra/tests/run-centos-ci.sh: line 26: molecule: command not found
./gluster-ansible-infra/tests/run-centos-ci.sh: line 27: molecule: command not found
./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory
./gluster-ansible-infra/tests/run-centos-ci.sh: line 30: molecule: command not found
./gluster-ansible-infra/tests/run-centos-ci.sh: line 31: molecule: command not found
Build step 'Execute shell' marked build as failure
Performing Post build task...
Could not match :Build started  : False
Logical operation result is FALSE
Skipping script  :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

END OF POST BUILD TASK : 0
From ci at centos.org Thu Aug 15 01:04:02 2019
From: ci at centos.org (ci at centos.org)
Date: Thu, 15 Aug 2019 01:04:02 +0000 (UTC)
Subject: [CI-results] gluster_build-rpms - Build # 10866 - Still Failing!
 (release-6 on CentOS-6/x86_64)
In-Reply-To: <1322417003.3500.1565830912520.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1322417003.3500.1565830912520.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <446384504.3503.1565831042988.JavaMail.jenkins@jenkins.ci.centos.org>

gluster_build-rpms - Build # 10866 - Still Failing:

Check console output at https://ci.centos.org/job/gluster_build-rpms/10866/ to view the results.

From ci at centos.org Fri Aug 16 00:16:29 2019
From: ci at centos.org (ci at centos.org)
Date: Fri, 16 Aug 2019 00:16:29 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #457
In-Reply-To: <288982576.3475.1565827997896.JavaMail.jenkins@jenkins.ci.centos.org>
References: <288982576.3475.1565827997896.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1558292197.3683.1565914590003.JavaMail.jenkins@jenkins.ci.centos.org>

See 
------------------------------------------
[...truncated 74.11 KB...]
Trying other mirror.
https://mirrors.xmission.com/fedora-epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
https://mirror.clarkson.edu/fedora-epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
http://mirrors.liquidweb.com/fedora-epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
http://mirror.coastal.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
https://mirror.steadfastnet.com/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
http://reflector.westga.edu/repos/Fedora-EPEL/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. https://dl.fedoraproject.org/pub/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. http://fedora-epel.mirror.iweb.com/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. http://mirror.its.dal.ca/pub/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. https://mirror.csclub.uwaterloo.ca/fedora/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. http://fedora.westmancom.com/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. https://mirrors.ucr.ac.cr/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. http://mirror.ci.centos.org/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. https://mirror.vcu.edu/pub/gnu%2Blinux/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. 
http://csc.mcs.sdsmt.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 12] Timeout on http://csc.mcs.sdsmt.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: (28, 'Connection timed out after 30001 milliseconds') Trying other mirror. http://mirror.cs.princeton.edu/pub/mirrors/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. https://mirrors.lug.mtu.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. http://mirror.uic.edu/EPEL/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. https://pubmirror2.math.uh.edu/fedora-buffet/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. http://mirror.math.princeton.edu/pub/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. https://download-ib01.fedoraproject.org/pub/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. https://mirror.colorado.edu/fedora/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. 
https://pubmirror1.math.uh.edu/fedora-buffet/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. https://mirror.chpc.utah.edu/pub/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. http://mirror.cs.pitt.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. https://mirrors.rit.edu/fedora/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. https://mirrors.cat.pdx.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. http://mirrors.mit.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. https://mirror.umd.edu/fedora/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. http://mirror.es.its.nyu.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. http://mirror.oss.ou.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. http://mirror.rnet.missouri.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. 
https://mirror.us-midwest-1.nexcess.net/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. http://mirror.nodesdirect.com/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. https://mirrors.sonic.net/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. http://fedora.mirrors.pair.com/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. https://sjc.edge.kernel.org/fedora-buffet/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. http://mirror.cogentco.com/pub/linux/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. https://ewr.edge.kernel.org/fedora-buffet/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. http://mirrors.syringanetworks.net/fedora-epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. http://mirror.sfo12.us.leaseweb.net/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. 
http://mirror.twinlakes.net/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. http://fedora-epel.mirrors.tds.net/fedora-epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. https://dfw.mirror.rackspace.com/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. https://ord.mirror.rackspace.com/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. http://mirrors.kernel.org/fedora-epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. http://mirror.us.leaseweb.net/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. https://mirror.mrjester.net/fedora/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. https://iad.mirror.rackspace.com/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. https://d2lzkl7pfhq30w.cloudfront.net/pub/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. 
http://fedora-epel.mirror.lstn.net/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. http://mirror.sjc02.svwh.net/fedora-epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. http://mirror.metrocast.net/fedora/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. http://mirror.compevo.com/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. https://mirror.dal.nexril.net/fedora-epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. https://mirrors.xmission.com/fedora-epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. https://mirror.clarkson.edu/fedora-epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. http://mirrors.liquidweb.com/fedora-epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. http://mirror.coastal.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. 
https://mirror.steadfastnet.com/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
http://reflector.westga.edu/repos/Fedora-EPEL/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
https://dl.fedoraproject.org/pub/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
http://fedora-epel.mirror.iweb.com/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
http://mirror.its.dal.ca/pub/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
https://mirror.csclub.uwaterloo.ca/fedora/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
http://fedora.westmancom.com/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
https://mirrors.ucr.ac.cr/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.


 One of the configured repositories failed (epel),
 and yum doesn't have enough cached data to continue. At this point the only
 safe thing yum can do is fail. There are a few ways to work "fix" this:

     1. Contact the upstream for the repository and get them to fix the problem.

     2. Reconfigure the baseurl/etc. for the repository, to point to a working
        upstream. This is most often useful if you are using a newer
        distribution release than is supported by the repository (and the
        packages for the previous distribution release still work).

     3. Run the command with the repository temporarily disabled
            yum --disablerepo=epel ...

     4. Disable the repository permanently, so yum won't use it by default. Yum
        will then just ignore the repository until you permanently enable it
        again or use --enablerepo for temporary usage:

            yum-config-manager --disable epel
        or
            subscription-manager repos --disable=epel

     5. Configure the failing repository to be skipped, if it is unavailable.
        Note that yum will try to contact the repo. when it runs most commands,
        so will have to try and fail each time (and thus. yum will be be much
        slower). If it is a very temporary problem though, this is often a nice
        compromise:

            yum-config-manager --save --setopt=epel.skip_if_unavailable=true

failure: repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2 from epel: [Errno 256] No more mirrors to try.
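Option 5 in yum's advice is the usual compromise for a CI builder at the mercy of mirror sync: one stale repo should degrade the run, not fail it. A minimal sketch of the setting that `yum-config-manager --save --setopt=epel.skip_if_unavailable=true` writes, shown on an illustrative local copy of the stanza; `epel.repo.sample` is a made-up path, not the real file under /etc/yum.repos.d:

```shell
#!/bin/sh
# Sketch only: demonstrate the skip_if_unavailable=1 repo option on a local
# sample file rather than the live yum configuration.
repo=epel.repo.sample
printf '[epel]\nname=Extra Packages for Enterprise Linux 7\nenabled=1\n' > "$repo"
echo 'skip_if_unavailable=1' >> "$repo"   # the effect of the --setopt above
grep -q '^skip_if_unavailable=1$' "$repo" && echo "epel marked skippable"
```

With that option set, yum still probes the repo on most commands (hence yum's warning that everything gets slower), but a dead mirror set no longer aborts the transaction.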
http://mirror.ci.centos.org/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found https://mirror.vcu.edu/pub/gnu+linux/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found http://csc.mcs.sdsmt.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 12] Timeout on http://csc.mcs.sdsmt.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: (28, 'Connection timed out after 30001 milliseconds') http://mirror.cs.princeton.edu/pub/mirrors/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found https://mirrors.lug.mtu.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found http://mirror.uic.edu/EPEL/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found https://pubmirror2.math.uh.edu/fedora-buffet/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found http://mirror.math.princeton.edu/pub/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found https://download-ib01.fedoraproject.org/pub/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found https://mirror.colorado.edu/fedora/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found 
https://pubmirror1.math.uh.edu/fedora-buffet/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found https://mirror.chpc.utah.edu/pub/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found http://mirror.cs.pitt.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found https://mirrors.rit.edu/fedora/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found https://mirrors.cat.pdx.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found http://mirrors.mit.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found https://mirror.umd.edu/fedora/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found http://mirror.es.its.nyu.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found http://mirror.oss.ou.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found http://mirror.rnet.missouri.edu/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found https://mirror.us-midwest-1.nexcess.net/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found 
http://mirror.nodesdirect.com/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
[the same 404, for the same repodata file, was returned by some thirty additional EPEL mirrors (sonic.net, pair.com, kernel.org, rackspace.com, leaseweb.net, xmission.com, dl.fedoraproject.org, and others); those identical lines are trimmed here]
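Every mirror in the 404 storm above is asked for the same `59cd2d90…-primary.sqlite.bz2`, which points at stale cached repo metadata on the build node rather than a simultaneous outage of thirty mirrors. A hedged triage sketch over a saved copy of the console output (the `console.log` filename and sample lines are illustrative, not from this job):

```shell
# Two sample 404 lines in the style of the log above; in practice you
# would point the pipeline at a saved copy of the console output.
cat > console.log <<'EOF'
http://mirror-a.example.org/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
https://mirror-b.example.net/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found
EOF
# Count the distinct repodata filenames among the 404s. A single
# filename across many mirrors indicates stale local yum metadata,
# not a broken mirror network.
grep -o '[0-9a-f]\{64\}-primary\.sqlite\.bz2' console.log | sort -u | wc -l
```

A count of 1 suggests the node's cached repomd.xml references metadata the mirrors have since rotated; on the node itself, `yum clean metadata` followed by a retry is the usual remedy.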
https://mirrors.ucr.ac.cr/epel/7/x86_64/repodata/59cd2d904711571ac63cfec0fec0641233dc027173c6667914bbcc2e10ea11dd-primary.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found Build step 'Execute shell' marked build as failure Performing Post build task... Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins3162789234440313648.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done 6912a594 +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 175 | n48.crusty | 172.19.2.48 | crusty | 3906 | Deployed | 6912a594 | None | None | 7 | x86_64 | 1 | 2470 | None | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Fri Aug 16 00:37:12 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 16 Aug 2019 00:37:12 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #261 In-Reply-To: <456876286.3480.1565829659480.JavaMail.jenkins@jenkins.ci.centos.org> References: <456876286.3480.1565829659480.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: 
<98204401.3685.1565915832363.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 288.89 KB...] TASK [container-engine/docker : check number of search domains] **************** Friday 16 August 2019 01:36:45 +0100 (0:00:00.133) 0:02:01.421 ********* TASK [container-engine/docker : check length of search domains] **************** Friday 16 August 2019 01:36:45 +0100 (0:00:00.129) 0:02:01.550 ********* TASK [container-engine/docker : check for minimum kernel version] ************** Friday 16 August 2019 01:36:46 +0100 (0:00:00.133) 0:02:01.684 ********* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Friday 16 August 2019 01:36:46 +0100 (0:00:00.133) 0:02:01.817 ********* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] *** Friday 16 August 2019 01:36:46 +0100 (0:00:00.253) 0:02:02.071 ********* TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Friday 16 August 2019 01:36:47 +0100 (0:00:00.631) 0:02:02.703 ********* TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Friday 16 August 2019 01:36:47 +0100 (0:00:00.113) 0:02:02.816 ********* TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Friday 16 August 2019 01:36:47 +0100 (0:00:00.111) 0:02:02.928 ********* TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Friday 16 August 2019 01:36:47 +0100 (0:00:00.155) 0:02:03.083 ********* TASK [container-engine/docker : Configure docker repository on Fedora] ********* Friday 16 August 2019 01:36:47 +0100 (0:00:00.137) 0:02:03.220 ********* TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Friday 16 August 2019 01:36:47 +0100 (0:00:00.129) 0:02:03.350 ********* TASK [container-engine/docker : Copy yum.conf for editing] ********************* Friday 16 August 2019 01:36:47 
+0100 (0:00:00.125) 0:02:03.475 ********* TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Friday 16 August 2019 01:36:47 +0100 (0:00:00.140) 0:02:03.616 ********* TASK [container-engine/docker : ensure docker packages are installed] ********** Friday 16 August 2019 01:36:48 +0100 (0:00:00.137) 0:02:03.753 ********* TASK [container-engine/docker : Ensure docker packages are installed] ********** Friday 16 August 2019 01:36:48 +0100 (0:00:00.159) 0:02:03.912 ********* TASK [container-engine/docker : get available packages on Ubuntu] ************** Friday 16 August 2019 01:36:48 +0100 (0:00:00.151) 0:02:04.064 ********* TASK [container-engine/docker : show available packages on ubuntu] ************* Friday 16 August 2019 01:36:48 +0100 (0:00:00.126) 0:02:04.191 ********* TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Friday 16 August 2019 01:36:48 +0100 (0:00:00.127) 0:02:04.318 ********* TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Friday 16 August 2019 01:36:48 +0100 (0:00:00.125) 0:02:04.444 ********* ok: [kube1] ok: [kube2] ok: [kube3] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Friday 16 August 2019 01:36:49 +0100 (0:00:00.890) 0:02:05.335 ********* ok: [kube1] ok: [kube2] ok: [kube3] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Friday 16 August 2019 01:36:50 +0100 (0:00:00.512) 0:02:05.847 ********* TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Friday 16 August 2019 01:36:50 +0100 (0:00:00.125) 0:02:05.973 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Friday 16 August 2019 01:36:50 +0100 (0:00:00.571) 0:02:06.544 ********* TASK [container-engine/docker : get systemd version] *************************** Friday 16 August 2019 01:36:51 +0100 (0:00:00.131) 0:02:06.676 ********* TASK [container-engine/docker : Write docker.service systemd file] ************* Friday 16 August 2019 01:36:51 +0100 (0:00:00.132) 0:02:06.809 ********* TASK [container-engine/docker : Write docker options systemd drop-in] ********** Friday 16 August 2019 01:36:51 +0100 (0:00:00.131) 0:02:06.941 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Friday 16 August 2019 01:36:52 +0100 (0:00:01.065) 0:02:08.006 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Friday 16 August 2019 01:36:53 +0100 (0:00:01.054) 0:02:09.060 ********* TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Friday 16 August 2019 01:36:53 +0100 (0:00:00.135) 0:02:09.196 ********* RUNNING HANDLER [container-engine/docker : restart docker] ********************* Friday 16 August 2019 01:36:53 +0100 (0:00:00.125) 0:02:09.321 ********* changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Friday 16 August 2019 01:36:54 +0100 (0:00:00.895) 0:02:10.216 ********* changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER 
[container-engine/docker : Docker | reload docker.socket] ****** Friday 16 August 2019 01:36:55 +0100 (0:00:00.526) 0:02:10.743 ********* RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Friday 16 August 2019 01:36:55 +0100 (0:00:00.126) 0:02:10.870 ********* changed: [kube2] changed: [kube1] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Friday 16 August 2019 01:36:58 +0100 (0:00:02.986) 0:02:13.856 ********* Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube1] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Friday 16 August 2019 01:37:08 +0100 (0:00:10.084) 0:02:23.941 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : ensure docker service is started and enabled] *** Friday 16 August 2019 01:37:08 +0100 (0:00:00.518) 0:02:24.459 ********* ok: [kube3] => (item=docker) ok: [kube1] => (item=docker) ok: [kube2] => (item=docker) TASK [download : include_tasks] ************************************************ Friday 16 August 2019 01:37:09 +0100 (0:00:00.677) 0:02:25.137 ********* included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Friday 16 August 2019 01:37:09 +0100 (0:00:00.220) 0:02:25.358 ********* ok: [kube1] ok: [kube2] ok: [kube3] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Friday 16 August 2019 01:37:10 +0100 (0:00:00.470) 0:02:25.828 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [download : container_download | create local directory for saved/loaded container images] *** Friday 16 August 2019 01:37:10 +0100 (0:00:00.473) 0:02:26.301 ********* TASK [download : 
Download items] *********************************************** Friday 16 August 2019 01:37:10 +0100 (0:00:00.069) 0:02:26.370 *********
fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
[the identical TaskInclude failure was reported for kube2 and kube3 as well, and again on each subsequent iteration of the download tasks; the repeated tracebacks are trimmed here]
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 (this include was logged three times)
PLAY RECAP *********************************************************************
kube1 : ok=109 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0
kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0
kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0
Friday 16 August 2019 01:37:12 +0100 (0:00:01.352) 0:02:27.723 *********
===============================================================================
Install packages ------------------------------------------------------- 26.13s
Wait for host to be available ------------------------------------------ 16.26s
Extend root VG --------------------------------------------------------- 15.51s
gather facts from all instances ---------------------------------------- 10.31s
container-engine/docker : Docker | pause while Docker restarts --------- 10.09s
Persist loaded modules -------------------------------------------------- 3.72s
container-engine/docker : Docker | reload docker ------------------------ 2.99s
kubernetes/preinstall : Create kubernetes directories ------------------- 1.95s
Load required kernel modules
-------------------------------------------- 1.77s bootstrap-os : Gather nodes hostnames ----------------------------------- 1.53s Extend the root LV and FS to occupy remaining space --------------------- 1.53s download : Download items ----------------------------------------------- 1.35s bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 1.32s download : Sync container ----------------------------------------------- 1.30s Gathering Facts --------------------------------------------------------- 1.25s download : Download items ----------------------------------------------- 1.21s kubernetes/preinstall : Create cni directories -------------------------- 1.20s bootstrap-os : Create remote_tmp for it is used by another module ------- 1.11s bootstrap-os : Disable fastestmirror plugin ----------------------------- 1.08s container-engine/docker : Write docker options systemd drop-in ---------- 1.07s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Fri Aug 16 00:53:51 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 16 Aug 2019 00:53:51 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10870 - Failure! 
(release-4.1 on CentOS-7/x86_64) Message-ID: <2047318984.3687.1565916832160.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10870 - Failure: Check console output at https://ci.centos.org/job/gluster_build-rpms/10870/ to view the results. From ci at centos.org Fri Aug 16 00:55:38 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 16 Aug 2019 00:55:38 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10867 - Still Failing! (master on CentOS-7/x86_64) In-Reply-To: <446384504.3503.1565831042988.JavaMail.jenkins@jenkins.ci.centos.org> References: <446384504.3503.1565831042988.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <348584928.3690.1565916938804.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10867 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10867/ to view the results. From ci at centos.org Fri Aug 16 00:55:50 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 16 Aug 2019 00:55:50 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10868 - Still Failing! (master on CentOS-6/x86_64) In-Reply-To: <348584928.3690.1565916938804.JavaMail.jenkins@jenkins.ci.centos.org> References: <348584928.3690.1565916938804.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <2083544216.3692.1565916951102.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10868 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10868/ to view the results. From ci at centos.org Fri Aug 16 00:56:38 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 16 Aug 2019 00:56:38 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10869 - Still Failing! 
(release-4.1 on CentOS-6/x86_64) In-Reply-To: <2083544216.3692.1565916951102.JavaMail.jenkins@jenkins.ci.centos.org> References: <2083544216.3692.1565916951102.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1701959820.3694.1565916998525.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10869 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10869/ to view the results. From ci at centos.org Fri Aug 16 00:59:27 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 16 Aug 2019 00:59:27 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10872 - Failure! (release-5 on CentOS-7/x86_64) Message-ID: <318413666.3698.1565917167907.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10872 - Failure: Check console output at https://ci.centos.org/job/gluster_build-rpms/10872/ to view the results. From ci at centos.org Fri Aug 16 00:59:31 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 16 Aug 2019 00:59:31 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10873 - Still Failing! (release-6 on CentOS-7/x86_64) In-Reply-To: <318413666.3698.1565917167907.JavaMail.jenkins@jenkins.ci.centos.org> References: <318413666.3698.1565917167907.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <736049530.3700.1565917172173.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10873 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10873/ to view the results. From ci at centos.org Fri Aug 16 01:00:40 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 16 Aug 2019 01:00:40 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10871 - Still Failing! 
(release-5 on CentOS-6/x86_64) In-Reply-To: <2047318984.3687.1565916832160.JavaMail.jenkins@jenkins.ci.centos.org> References: <2047318984.3687.1565916832160.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <2104851830.3702.1565917240993.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10871 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10871/ to view the results. From ci at centos.org Fri Aug 16 01:01:34 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 16 Aug 2019 01:01:34 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10874 - Still Failing! (release-6 on CentOS-6/x86_64) In-Reply-To: <736049530.3700.1565917172173.JavaMail.jenkins@jenkins.ci.centos.org> References: <736049530.3700.1565917172173.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <732424389.3704.1565917294610.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10874 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10874/ to view the results. From ci at centos.org Fri Aug 16 01:12:05 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 16 Aug 2019 01:12:05 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #286 In-Reply-To: <1142480107.3501.1565830984524.JavaMail.jenkins@jenkins.ci.centos.org> References: <1142480107.3501.1565830984524.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <953710355.3706.1565917925069.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 57.43 KB...] changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', 
u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── create └── 
prepare --> Scenario: 'default' --> Action: 'create' [DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg. PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', 
u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
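[Editorial note: both molecule runs abort at the same point. ansible-lint rule [701] wants galaxy_info to declare supported platforms, and the [703] findings fire because author, description, company, and license in roles/firewall_config/meta/main.yml still hold the ansible-galaxy init placeholders ('your name', 'your description', ...). A sketch of metadata that would satisfy those rules; every value below is illustrative, not taken from the repository:]

```yaml
# Illustrative meta/main.yml for the firewall_config role.
# All values are placeholders chosen to satisfy ansible-lint
# rules [701]/[703]; they are not the project's real metadata.
galaxy_info:
  author: Gluster Community          # replaces 'your name'               -> [703]
  description: Configure firewall rules for a GlusterFS deployment  #    -> [703]
  company: Gluster                   # replaces 'your company (optional)' -> [703]
  license: GPLv3                     # replaces the template text         -> [703]
  min_ansible_version: 1.2
  platforms:                         # required by rule [701]
    - name: EL
      versions:
        - 7
  galaxy_tags:
    - gluster
    - storage
dependencies: []
```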
Could not match :Build started : False Logical operation result is FALSE Skipping script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0 From ci at centos.org Sat Aug 17 00:16:55 2019 From: ci at centos.org (ci at centos.org) Date: Sat, 17 Aug 2019 00:16:55 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #458 In-Reply-To: <1558292197.3683.1565914590003.JavaMail.jenkins@jenkins.ci.centos.org> References: <1558292197.3683.1565914590003.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <2146135621.3826.1566001015405.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 39.00 KB...] Transaction test succeeded Running transaction Installing : python36-libs-3.6.8-1.el7.x86_64 1/52 Installing : python36-3.6.8-1.el7.x86_64 2/52 Installing : apr-1.4.8-3.el7_4.1.x86_64 3/52 Installing : mpfr-3.1.1-4.el7.x86_64 4/52 Installing : libmpc-1.0.1-3.el7.x86_64 5/52 Installing : apr-util-1.5.2-6.el7.x86_64 6/52 Installing : python36-six-1.11.0-3.el7.noarch 7/52 Installing : cpp-4.8.5-36.el7_6.2.x86_64 8/52 Installing : python36-idna-2.7-2.el7.noarch 9/52 Installing : python36-pysocks-1.6.8-6.el7.noarch 10/52 Installing : python36-urllib3-1.19.1-5.el7.noarch 11/52 Installing : python36-pyroute2-0.4.13-2.el7.noarch 12/52 Installing : python36-setuptools-39.2.0-3.el7.noarch 13/52 Installing : python36-chardet-2.3.0-6.el7.noarch 14/52 Installing : python36-requests-2.12.5-3.el7.noarch 15/52 Installing : python36-distro-1.2.0-3.el7.noarch 16/52 Installing : python36-markupsafe-0.23-3.el7.x86_64 17/52 Installing : python36-jinja2-2.8.1-2.el7.noarch 18/52 Installing : python36-rpm-4.11.3-4.el7.x86_64 19/52 Installing : elfutils-0.172-2.el7.x86_64 20/52 Installing : unzip-6.0-19.el7.x86_64 21/52 Installing : 
dwz-0.11-3.el7.x86_64 22/52 Installing : bzip2-1.0.6-13.el7.x86_64 23/52 Installing : distribution-gpg-keys-1.32-1.el7.noarch 24/52 Installing : mock-core-configs-30.4-1.el7.noarch 25/52 Installing : usermode-1.111-5.el7.x86_64 26/52 Installing : pakchois-0.4-10.el7.x86_64 27/52 Installing : patch-2.7.1-10.el7_5.x86_64 28/52 Installing : libmodman-2.0.1-8.el7.x86_64 29/52 Installing : libproxy-0.4.11-11.el7.x86_64 30/52 Installing : gdb-7.6.1-114.el7.x86_64 31/52 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/52 Installing : perl-srpm-macros-1-8.el7.noarch 33/52 Installing : pigz-2.3.4-1.el7.x86_64 34/52 Installing : golang-src-1.11.5-1.el7.noarch 35/52 Installing : nettle-2.7.1-8.el7.x86_64 36/52 Installing : kernel-headers-3.10.0-957.27.2.el7.x86_64 37/52 Installing : glibc-headers-2.17-260.el7_6.6.x86_64 38/52 Installing : glibc-devel-2.17-260.el7_6.6.x86_64 39/52 Installing : gcc-4.8.5-36.el7_6.2.x86_64 40/52 Installing : zip-3.0-11.el7.x86_64 41/52 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 42/52 Installing : mercurial-2.6.2-8.el7_4.x86_64 43/52 Installing : trousers-0.3.14-2.el7.x86_64 44/52 Installing : gnutls-3.3.29-9.el7_6.x86_64 45/52 Installing : neon-0.30.0-3.el7.x86_64 46/52 Installing : subversion-libs-1.7.14-14.el7.x86_64 47/52 Installing : subversion-1.7.14-14.el7.x86_64 48/52 Installing : golang-1.11.5-1.el7.x86_64 49/52 Installing : golang-bin-1.11.5-1.el7.x86_64 50/52 Installing : rpm-build-4.11.3-35.el7.x86_64 51/52 Installing : mock-1.4.16-1.el7.noarch 52/52 Verifying : trousers-0.3.14-2.el7.x86_64 1/52 Verifying : python36-idna-2.7-2.el7.noarch 2/52 Verifying : rpm-build-4.11.3-35.el7.x86_64 3/52 Verifying : glibc-headers-2.17-260.el7_6.6.x86_64 4/52 Verifying : python36-pysocks-1.6.8-6.el7.noarch 5/52 Verifying : mercurial-2.6.2-8.el7_4.x86_64 6/52 Verifying : zip-3.0-11.el7.x86_64 7/52 Verifying : python36-3.6.8-1.el7.x86_64 8/52 Verifying : subversion-libs-1.7.14-14.el7.x86_64 9/52 Verifying : 
python36-urllib3-1.19.1-5.el7.noarch 10/52 Verifying : kernel-headers-3.10.0-957.27.2.el7.x86_64 11/52 Verifying : nettle-2.7.1-8.el7.x86_64 12/52 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 13/52 Verifying : golang-src-1.11.5-1.el7.noarch 14/52 Verifying : python36-pyroute2-0.4.13-2.el7.noarch 15/52 Verifying : pigz-2.3.4-1.el7.x86_64 16/52 Verifying : perl-srpm-macros-1-8.el7.noarch 17/52 Verifying : golang-1.11.5-1.el7.x86_64 18/52 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 19/52 Verifying : golang-bin-1.11.5-1.el7.x86_64 20/52 Verifying : gdb-7.6.1-114.el7.x86_64 21/52 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 22/52 Verifying : gnutls-3.3.29-9.el7_6.x86_64 23/52 Verifying : mock-1.4.16-1.el7.noarch 24/52 Verifying : libmodman-2.0.1-8.el7.x86_64 25/52 Verifying : python36-setuptools-39.2.0-3.el7.noarch 26/52 Verifying : mpfr-3.1.1-4.el7.x86_64 27/52 Verifying : python36-six-1.11.0-3.el7.noarch 28/52 Verifying : apr-util-1.5.2-6.el7.x86_64 29/52 Verifying : python36-chardet-2.3.0-6.el7.noarch 30/52 Verifying : patch-2.7.1-10.el7_5.x86_64 31/52 Verifying : libmpc-1.0.1-3.el7.x86_64 32/52 Verifying : pakchois-0.4-10.el7.x86_64 33/52 Verifying : neon-0.30.0-3.el7.x86_64 34/52 Verifying : usermode-1.111-5.el7.x86_64 35/52 Verifying : apr-1.4.8-3.el7_4.1.x86_64 36/52 Verifying : libproxy-0.4.11-11.el7.x86_64 37/52 Verifying : mock-core-configs-30.4-1.el7.noarch 38/52 Verifying : distribution-gpg-keys-1.32-1.el7.noarch 39/52 Verifying : glibc-devel-2.17-260.el7_6.6.x86_64 40/52 Verifying : bzip2-1.0.6-13.el7.x86_64 41/52 Verifying : subversion-1.7.14-14.el7.x86_64 42/52 Verifying : python36-distro-1.2.0-3.el7.noarch 43/52 Verifying : dwz-0.11-3.el7.x86_64 44/52 Verifying : unzip-6.0-19.el7.x86_64 45/52 Verifying : python36-markupsafe-0.23-3.el7.x86_64 46/52 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 47/52 Verifying : python36-requests-2.12.5-3.el7.noarch 48/52 Verifying : python36-jinja2-2.8.1-2.el7.noarch 49/52 Verifying : python36-libs-3.6.8-1.el7.x86_64 
50/52 Verifying : elfutils-0.172-2.el7.x86_64 51/52 Verifying : python36-rpm-4.11.3-4.el7.x86_64 52/52 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.16-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.32-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.6 glibc-headers.x86_64 0:2.17-260.el7_6.6 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.27.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.4-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python36.x86_64 0:3.6.8-1.el7 python36-chardet.noarch 0:2.3.0-6.el7 python36-distro.noarch 0:1.2.0-3.el7 python36-idna.noarch 0:2.7-2.el7 python36-jinja2.noarch 0:2.8.1-2.el7 python36-libs.x86_64 0:3.6.8-1.el7 python36-markupsafe.x86_64 0:0.23-3.el7 python36-pyroute2.noarch 0:0.4.13-2.el7 python36-pysocks.noarch 0:1.6.8-6.el7 python36-requests.noarch 0:2.12.5-3.el7 python36-rpm.x86_64 0:4.11.3-4.el7 python36-setuptools.noarch 0:39.2.0-3.el7 python36-six.noarch 0:1.11.0-3.el7 python36-urllib3.noarch 0:1.19.1-5.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1576 0 --:--:-- --:--:-- --:--:-- 1583 83 8513k 83 7139k 0 0 10.9M 0 --:--:-- --:--:-- --:--:-- 10.9M100 8513k 100 8513k 0 0 12.8M 0 --:--:-- --:--:-- --:--:-- 167M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 1914 0 --:--:-- --:--:-- --:--:-- 1917 100 38.3M 100 38.3M 0 0 36.5M 0 0:00:01 0:00:01 --:--:-- 36.5M100 38.3M 100 38.3M 0 0 36.5M 0 0:00:01 0:00:01 --:--:-- 0 Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 571 0 --:--:-- --:--:-- --:--:-- 573 0 0 0 620 0 0 1706 0 --:--:-- --:--:-- --:--:-- 1706 0 10.7M 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 10.7M 100 10.7M 0 0 14.6M 0 --:--:-- --:--:-- --:--:-- 48.1M ~/nightlyrpmWInnQZ/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmWInnQZ/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmWInnQZ/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmWInnQZ ~ INFO: mock.py version 1.4.16 starting (python version = 3.6.8)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmWInnQZ/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.16 INFO: Mock Version: 1.4.16 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmWInnQZ/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 24 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 66d1b6bdfc4d463cb1f3e2a8b34570cf -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.achj20y6:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True Logical operation result is TRUE Running script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins5171410560103269950.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done c67c7778
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname   | ip_address  | chassis | used_count | current_state | comment  | distro | rel  | centos_version | architecture | node_pool | console_port | flavor |
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 158     | n31.crusty | 172.19.2.31 | crusty  | 3910       | Deployed      | c67c7778 | None   | None | 7              | x86_64       | 1         | 2300         | None   |
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Sat Aug 17 00:37:11 2019 From: ci at centos.org (ci at centos.org) Date: Sat, 17 Aug 2019 00:37:11 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #262 In-Reply-To: <98204401.3685.1565915832363.JavaMail.jenkins@jenkins.ci.centos.org> References: <98204401.3685.1565915832363.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <122307535.3827.1566002231171.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 288.99 KB...] 
TASK [container-engine/docker : check number of search domains] **************** Saturday 17 August 2019 01:36:44 +0100 (0:00:00.131) 0:02:01.096 ******* TASK [container-engine/docker : check length of search domains] **************** Saturday 17 August 2019 01:36:44 +0100 (0:00:00.130) 0:02:01.227 ******* TASK [container-engine/docker : check for minimum kernel version] ************** Saturday 17 August 2019 01:36:44 +0100 (0:00:00.128) 0:02:01.356 ******* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Saturday 17 August 2019 01:36:45 +0100 (0:00:00.127) 0:02:01.483 ******* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] *** Saturday 17 August 2019 01:36:45 +0100 (0:00:00.252) 0:02:01.736 ******* TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Saturday 17 August 2019 01:36:45 +0100 (0:00:00.630) 0:02:02.366 ******* TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Saturday 17 August 2019 01:36:45 +0100 (0:00:00.112) 0:02:02.479 ******* TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Saturday 17 August 2019 01:36:46 +0100 (0:00:00.109) 0:02:02.589 ******* TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Saturday 17 August 2019 01:36:46 +0100 (0:00:00.143) 0:02:02.732 ******* TASK [container-engine/docker : Configure docker repository on Fedora] ********* Saturday 17 August 2019 01:36:46 +0100 (0:00:00.136) 0:02:02.869 ******* TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Saturday 17 August 2019 01:36:46 +0100 (0:00:00.123) 0:02:02.992 ******* TASK [container-engine/docker : Copy yum.conf for editing] ********************* Saturday 17 August 2019 01:36:46 +0100 (0:00:00.128) 0:02:03.121 ******* TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Saturday 17 August 
2019 01:36:46 +0100 (0:00:00.131) 0:02:03.252 ******* TASK [container-engine/docker : ensure docker packages are installed] ********** Saturday 17 August 2019 01:36:46 +0100 (0:00:00.130) 0:02:03.383 ******* TASK [container-engine/docker : Ensure docker packages are installed] ********** Saturday 17 August 2019 01:36:47 +0100 (0:00:00.158) 0:02:03.541 ******* TASK [container-engine/docker : get available packages on Ubuntu] ************** Saturday 17 August 2019 01:36:47 +0100 (0:00:00.154) 0:02:03.695 ******* TASK [container-engine/docker : show available packages on ubuntu] ************* Saturday 17 August 2019 01:36:47 +0100 (0:00:00.123) 0:02:03.819 ******* TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Saturday 17 August 2019 01:36:47 +0100 (0:00:00.123) 0:02:03.943 ******* TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Saturday 17 August 2019 01:36:47 +0100 (0:00:00.126) 0:02:04.070 ******* ok: [kube3] ok: [kube1] ok: [kube2] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Saturday 17 August 2019 01:36:48 +0100 (0:00:00.895) 0:02:04.966 ******* ok: [kube2] ok: [kube1] ok: [kube3] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Saturday 17 August 2019 01:36:49 +0100 (0:00:00.519) 0:02:05.485 ******* TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Saturday 17 August 2019 01:36:49 +0100 (0:00:00.128) 0:02:05.613 ******* changed: [kube3] changed: [kube1] changed: [kube2] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Saturday 17 August 2019 01:36:49 +0100 (0:00:00.534) 0:02:06.147 ******* TASK [container-engine/docker : get systemd version] *************************** Saturday 17 August 2019 01:36:49 +0100 (0:00:00.133) 0:02:06.281 ******* TASK [container-engine/docker : Write docker.service systemd file] ************* Saturday 17 August 2019 01:36:49 +0100 (0:00:00.135) 0:02:06.416 ******* TASK [container-engine/docker : Write docker options systemd drop-in] ********** Saturday 17 August 2019 01:36:50 +0100 (0:00:00.153) 0:02:06.569 ******* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Saturday 17 August 2019 01:36:51 +0100 (0:00:01.046) 0:02:07.616 ******* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Saturday 17 August 2019 01:36:52 +0100 (0:00:00.943) 0:02:08.560 ******* TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Saturday 17 August 2019 01:36:52 +0100 (0:00:00.135) 0:02:08.696 ******* RUNNING HANDLER [container-engine/docker : restart docker] ********************* Saturday 17 August 2019 01:36:52 +0100 (0:00:00.108) 0:02:08.804 ******* changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Saturday 17 August 2019 01:36:53 +0100 (0:00:00.860) 0:02:09.665 ******* changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER 
[container-engine/docker : Docker | reload docker.socket] ****** Saturday 17 August 2019 01:36:53 +0100 (0:00:00.540) 0:02:10.205 ******* RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Saturday 17 August 2019 01:36:53 +0100 (0:00:00.132) 0:02:10.338 ******* changed: [kube3] changed: [kube2] changed: [kube1] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Saturday 17 August 2019 01:36:56 +0100 (0:00:03.068) 0:02:13.407 ******* Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube1] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Saturday 17 August 2019 01:37:07 +0100 (0:00:10.102) 0:02:23.510 ******* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : ensure docker service is started and enabled] *** Saturday 17 August 2019 01:37:07 +0100 (0:00:00.530) 0:02:24.040 ******* ok: [kube1] => (item=docker) ok: [kube2] => (item=docker) ok: [kube3] => (item=docker) TASK [download : include_tasks] ************************************************ Saturday 17 August 2019 01:37:08 +0100 (0:00:00.586) 0:02:24.626 ******* included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Saturday 17 August 2019 01:37:08 +0100 (0:00:00.222) 0:02:24.848 ******* ok: [kube1] ok: [kube2] ok: [kube3] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Saturday 17 August 2019 01:37:08 +0100 (0:00:00.589) 0:02:25.438 ******* changed: [kube1] changed: [kube2] changed: [kube3] TASK [download : container_download | create local directory for saved/loaded container images] *** Saturday 17 August 2019 01:37:09 +0100 (0:00:00.442) 0:02:25.881 ******* TASK [download : 
Download items] *********************************************** Saturday 17 August 2019 01:37:09 +0100 (0:00:00.066) 0:02:25.948 ******* fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => [same 'delegate_to' error as kube1] fatal: [kube3]: FAILED! => [same 'delegate_to' error as kube1] [the identical fatal then repeats for kube1, kube2 and kube3 on each remaining download task, 10 failures per host in total, interleaved with three occurrences of:] included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 PLAY RECAP ********************************************************************* kube1 : ok=109 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0 kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0 kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0 Saturday 17 August 2019 01:37:10 +0100 (0:00:01.343) 0:02:27.292 ******* =============================================================================== Install packages ------------------------------------------------------- 27.16s Wait for host to be available ------------------------------------------ 16.18s Extend root VG --------------------------------------------------------- 15.09s gather facts from all instances ---------------------------------------- 10.16s container-engine/docker : Docker | pause while Docker restarts --------- 10.10s Persist loaded modules -------------------------------------------------- 3.47s container-engine/docker : Docker | reload docker ------------------------ 3.07s kubernetes/preinstall : Create kubernetes directories ------------------- 1.87s Load required kernel modules
-------------------------------------------- 1.65s bootstrap-os : Gather nodes hostnames ----------------------------------- 1.59s Extend the root LV and FS to occupy remaining space --------------------- 1.45s bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 1.38s download : Download items ----------------------------------------------- 1.34s download : Download items ----------------------------------------------- 1.29s Gathering Facts --------------------------------------------------------- 1.27s download : Sync container ----------------------------------------------- 1.20s bootstrap-os : Create remote_tmp for it is used by another module ------- 1.20s bootstrap-os : check if atomic host ------------------------------------- 1.16s kubernetes/preinstall : Create cni directories -------------------------- 1.14s bootstrap-os : Disable fastestmirror plugin ----------------------------- 1.09s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
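[Editor's note: the repeated "'delegate_to' is not a valid attribute for a TaskInclude" fatal above is Ansible 2.8 tightening which keywords a dynamic include accepts. A minimal sketch of the pattern and one valid rewrite follows; the included file name and variable are illustrative assumptions, not kubespray's actual code or fix.]

```yaml
# Rejected by Ansible >= 2.8: 'delegate_to' set directly on a dynamic include.
- name: container_download | Make download decision if pull is required by tag or sha256
  include_tasks: set_docker_image_facts.yml   # hypothetical file name
  delegate_to: "{{ download_delegate }}"      # -> "not a valid attribute for a TaskInclude"

# Accepted alternative: push the task keyword down to the included tasks via 'apply'.
- name: container_download | Make download decision if pull is required by tag or sha256
  include_tasks:
    file: set_docker_image_facts.yml
    apply:
      delegate_to: "{{ download_delegate }}"
```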
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Sat Aug 17 01:18:43 2019 From: ci at centos.org (ci at centos.org) Date: Sat, 17 Aug 2019 01:18:43 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #287 In-Reply-To: <953710355.3706.1565917925069.JavaMail.jenkins@jenkins.ci.centos.org> References: <953710355.3706.1565917925069.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1223732589.3831.1566004723631.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 57.28 KB...] changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. 
Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... [701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 [same metadata dump as above] [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 [same metadata dump as above] [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 [same metadata dump as above] [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 [same metadata dump as above] An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── create └── prepare --> Scenario: 'default' --> Action: 'create' [DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg. PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] 
******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 [same metadata dump as above] [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 [same metadata dump as above] [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 [same metadata dump as above] [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 [same metadata dump as above] An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
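[Editor's note: the ansible-lint [701]/[703] hits above all point at the role's placeholder galaxy metadata. A meta/main.yml along the following lines would satisfy both rules; every value here is illustrative, not the project's actual metadata.]

```yaml
---
galaxy_info:
  author: Gluster Ansible maintainers   # [703] flags the 'your name' placeholder
  description: Configure firewalld for GlusterFS nodes
  company: example                      # optional; illustrative value
  license: GPLv3
  min_ansible_version: 2.4
  platforms:                            # [701] requires at least one platform
    - name: EL
      versions:
        - 7
  galaxy_tags:
    - gluster
dependencies: []
```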
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Sun Aug 18 00:16:42 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 18 Aug 2019 00:16:42 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #459 In-Reply-To: <2146135621.3826.1566001015405.JavaMail.jenkins@jenkins.ci.centos.org> References: <2146135621.3826.1566001015405.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <234008036.3874.1566087402785.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 39.02 KB...] Transaction test succeeded Running transaction Installing : python36-libs-3.6.8-1.el7.x86_64 1/52 Installing : python36-3.6.8-1.el7.x86_64 2/52 Installing : apr-1.4.8-3.el7_4.1.x86_64 3/52 Installing : mpfr-3.1.1-4.el7.x86_64 4/52 Installing : libmpc-1.0.1-3.el7.x86_64 5/52 Installing : apr-util-1.5.2-6.el7.x86_64 6/52 Installing : python36-six-1.11.0-3.el7.noarch 7/52 Installing : cpp-4.8.5-36.el7_6.2.x86_64 8/52 Installing : python36-idna-2.7-2.el7.noarch 9/52 Installing : python36-pysocks-1.6.8-6.el7.noarch 10/52 Installing : python36-urllib3-1.19.1-5.el7.noarch 11/52 Installing : python36-pyroute2-0.4.13-2.el7.noarch 12/52 Installing : python36-setuptools-39.2.0-3.el7.noarch 13/52 Installing : python36-chardet-2.3.0-6.el7.noarch 14/52 Installing : python36-requests-2.12.5-3.el7.noarch 15/52 Installing : python36-distro-1.2.0-3.el7.noarch 16/52 Installing : python36-markupsafe-0.23-3.el7.x86_64 17/52 Installing : python36-jinja2-2.8.1-2.el7.noarch 18/52 Installing : python36-rpm-4.11.3-4.el7.x86_64 19/52 Installing : elfutils-0.172-2.el7.x86_64 20/52 Installing : unzip-6.0-19.el7.x86_64 21/52 Installing : dwz-0.11-3.el7.x86_64 
22/52 Installing : bzip2-1.0.6-13.el7.x86_64 23/52 Installing : distribution-gpg-keys-1.32-1.el7.noarch 24/52 Installing : mock-core-configs-30.4-1.el7.noarch 25/52 Installing : usermode-1.111-5.el7.x86_64 26/52 Installing : pakchois-0.4-10.el7.x86_64 27/52 Installing : patch-2.7.1-10.el7_5.x86_64 28/52 Installing : libmodman-2.0.1-8.el7.x86_64 29/52 Installing : libproxy-0.4.11-11.el7.x86_64 30/52 Installing : gdb-7.6.1-114.el7.x86_64 31/52 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/52 Installing : perl-srpm-macros-1-8.el7.noarch 33/52 Installing : pigz-2.3.4-1.el7.x86_64 34/52 Installing : golang-src-1.11.5-1.el7.noarch 35/52 Installing : nettle-2.7.1-8.el7.x86_64 36/52 Installing : kernel-headers-3.10.0-957.27.2.el7.x86_64 37/52 Installing : glibc-headers-2.17-260.el7_6.6.x86_64 38/52 Installing : glibc-devel-2.17-260.el7_6.6.x86_64 39/52 Installing : gcc-4.8.5-36.el7_6.2.x86_64 40/52 Installing : zip-3.0-11.el7.x86_64 41/52 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 42/52 Installing : mercurial-2.6.2-8.el7_4.x86_64 43/52 Installing : trousers-0.3.14-2.el7.x86_64 44/52 Installing : gnutls-3.3.29-9.el7_6.x86_64 45/52 Installing : neon-0.30.0-3.el7.x86_64 46/52 Installing : subversion-libs-1.7.14-14.el7.x86_64 47/52 Installing : subversion-1.7.14-14.el7.x86_64 48/52 Installing : golang-1.11.5-1.el7.x86_64 49/52 Installing : golang-bin-1.11.5-1.el7.x86_64 50/52 Installing : rpm-build-4.11.3-35.el7.x86_64 51/52 Installing : mock-1.4.16-1.el7.noarch 52/52 Verifying : trousers-0.3.14-2.el7.x86_64 1/52 Verifying : python36-idna-2.7-2.el7.noarch 2/52 Verifying : rpm-build-4.11.3-35.el7.x86_64 3/52 Verifying : glibc-headers-2.17-260.el7_6.6.x86_64 4/52 Verifying : python36-pysocks-1.6.8-6.el7.noarch 5/52 Verifying : mercurial-2.6.2-8.el7_4.x86_64 6/52 Verifying : zip-3.0-11.el7.x86_64 7/52 Verifying : python36-3.6.8-1.el7.x86_64 8/52 Verifying : subversion-libs-1.7.14-14.el7.x86_64 9/52 Verifying : python36-urllib3-1.19.1-5.el7.noarch 10/52 
Verifying : kernel-headers-3.10.0-957.27.2.el7.x86_64 11/52 Verifying : nettle-2.7.1-8.el7.x86_64 12/52 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 13/52 Verifying : golang-src-1.11.5-1.el7.noarch 14/52 Verifying : python36-pyroute2-0.4.13-2.el7.noarch 15/52 Verifying : pigz-2.3.4-1.el7.x86_64 16/52 Verifying : perl-srpm-macros-1-8.el7.noarch 17/52 Verifying : golang-1.11.5-1.el7.x86_64 18/52 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 19/52 Verifying : golang-bin-1.11.5-1.el7.x86_64 20/52 Verifying : gdb-7.6.1-114.el7.x86_64 21/52 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 22/52 Verifying : gnutls-3.3.29-9.el7_6.x86_64 23/52 Verifying : mock-1.4.16-1.el7.noarch 24/52 Verifying : libmodman-2.0.1-8.el7.x86_64 25/52 Verifying : python36-setuptools-39.2.0-3.el7.noarch 26/52 Verifying : mpfr-3.1.1-4.el7.x86_64 27/52 Verifying : python36-six-1.11.0-3.el7.noarch 28/52 Verifying : apr-util-1.5.2-6.el7.x86_64 29/52 Verifying : python36-chardet-2.3.0-6.el7.noarch 30/52 Verifying : patch-2.7.1-10.el7_5.x86_64 31/52 Verifying : libmpc-1.0.1-3.el7.x86_64 32/52 Verifying : pakchois-0.4-10.el7.x86_64 33/52 Verifying : neon-0.30.0-3.el7.x86_64 34/52 Verifying : usermode-1.111-5.el7.x86_64 35/52 Verifying : apr-1.4.8-3.el7_4.1.x86_64 36/52 Verifying : libproxy-0.4.11-11.el7.x86_64 37/52 Verifying : mock-core-configs-30.4-1.el7.noarch 38/52 Verifying : distribution-gpg-keys-1.32-1.el7.noarch 39/52 Verifying : glibc-devel-2.17-260.el7_6.6.x86_64 40/52 Verifying : bzip2-1.0.6-13.el7.x86_64 41/52 Verifying : subversion-1.7.14-14.el7.x86_64 42/52 Verifying : python36-distro-1.2.0-3.el7.noarch 43/52 Verifying : dwz-0.11-3.el7.x86_64 44/52 Verifying : unzip-6.0-19.el7.x86_64 45/52 Verifying : python36-markupsafe-0.23-3.el7.x86_64 46/52 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 47/52 Verifying : python36-requests-2.12.5-3.el7.noarch 48/52 Verifying : python36-jinja2-2.8.1-2.el7.noarch 49/52 Verifying : python36-libs-3.6.8-1.el7.x86_64 50/52 Verifying : 
elfutils-0.172-2.el7.x86_64 51/52 Verifying : python36-rpm-4.11.3-4.el7.x86_64 52/52 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.16-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.32-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.6 glibc-headers.x86_64 0:2.17-260.el7_6.6 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.27.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.4-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python36.x86_64 0:3.6.8-1.el7 python36-chardet.noarch 0:2.3.0-6.el7 python36-distro.noarch 0:1.2.0-3.el7 python36-idna.noarch 0:2.7-2.el7 python36-jinja2.noarch 0:2.8.1-2.el7 python36-libs.x86_64 0:3.6.8-1.el7 python36-markupsafe.x86_64 0:0.23-3.el7 python36-pyroute2.noarch 0:0.4.13-2.el7 python36-pysocks.noarch 0:1.6.8-6.el7 python36-requests.noarch 0:2.12.5-3.el7 python36-rpm.x86_64 0:4.11.3-4.el7 python36-setuptools.noarch 0:39.2.0-3.el7 python36-six.noarch 0:1.11.0-3.el7 python36-urllib3.noarch 0:1.19.1-5.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0 [...curl progress output truncated...] Installing gometalinter. Version: 2.0.5 [...curl progress output truncated...] Installing etcd. Version: v3.3.9 [...curl progress output truncated...] ~/nightlyrpmMcEf6p/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmMcEf6p/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmMcEf6p/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmMcEf6p ~ INFO: mock.py version 1.4.16 starting (python version = 3.6.8)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmMcEf6p/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.16 INFO: Mock Version: 1.4.16 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmMcEf6p/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 27 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 52c8117ea86e4368a1a83899f502cec8 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.dfaigji8:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins2909509478186882227.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done ae4e9add +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 103 | n39.pufty | 172.19.3.103 | pufty | 3915 | Deployed | ae4e9add | None | None | 7 | x86_64 | 1 | 2380 | None | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Sun Aug 18 00:41:01 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 18 Aug 2019 00:41:01 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #263 In-Reply-To: <122307535.3827.1566002231171.JavaMail.jenkins@jenkins.ci.centos.org> References: <122307535.3827.1566002231171.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <100250500.3875.1566088861800.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 289.04 KB...] 
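For readability, the post-build node-release script that appears flattened in the log above can be sketched as follows. This is a reconstruction, not the canonical file: the `cico` CLI (used by the CI builders to return Duffy nodes) is replaced by a stub function so the sketch runs stand-alone, and the `/tmp` fallback for `WORKSPACE` is added here for the same reason.

```shell
#!/bin/sh
# Readable reconstruction of cico-node-done-from-ansible.sh (a sketch).
# The real `cico` CLI is stubbed so the script is self-contained.
cico() { echo "cico $*"; }              # stub for illustration only

WORKSPACE=${WORKSPACE:-/tmp}            # fallback added for this sketch
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

echo "ae4e9add" > "$SSID_FILE"          # example session ID from the log

released=""
for ssid in $(cat "$SSID_FILE"); do
  # Release every node held under this session ID, one ID per line.
  released="$released$(cico -q node done "$ssid")"
done
echo "$released"
```

On the real builders the loop simply invokes `cico -q node done <ssid>` for each ID recorded in the workspace's `cico-ssid` file, as the `+ cico -q node done ae4e9add` trace above shows.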
TASK [container-engine/docker : check number of search domains] **************** Sunday 18 August 2019 01:40:18 +0100 (0:00:00.366) 0:03:01.540 ********* TASK [container-engine/docker : check length of search domains] **************** Sunday 18 August 2019 01:40:18 +0100 (0:00:00.323) 0:03:01.864 ********* TASK [container-engine/docker : check for minimum kernel version] ************** Sunday 18 August 2019 01:40:18 +0100 (0:00:00.322) 0:03:02.186 ********* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Sunday 18 August 2019 01:40:19 +0100 (0:00:00.294) 0:03:02.481 ********* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] *** Sunday 18 August 2019 01:40:19 +0100 (0:00:00.550) 0:03:03.031 ********* TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Sunday 18 August 2019 01:40:20 +0100 (0:00:01.320) 0:03:04.352 ********* TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Sunday 18 August 2019 01:40:21 +0100 (0:00:00.312) 0:03:04.664 ********* TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Sunday 18 August 2019 01:40:21 +0100 (0:00:00.264) 0:03:04.928 ********* TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Sunday 18 August 2019 01:40:21 +0100 (0:00:00.318) 0:03:05.247 ********* TASK [container-engine/docker : Configure docker repository on Fedora] ********* Sunday 18 August 2019 01:40:22 +0100 (0:00:00.323) 0:03:05.571 ********* TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Sunday 18 August 2019 01:40:22 +0100 (0:00:00.278) 0:03:05.850 ********* TASK [container-engine/docker : Copy yum.conf for editing] ********************* Sunday 18 August 2019 01:40:22 +0100 (0:00:00.302) 0:03:06.152 ********* TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Sunday 18 August 2019 
01:40:23 +0100 (0:00:00.295) 0:03:06.448 ********* TASK [container-engine/docker : ensure docker packages are installed] ********** Sunday 18 August 2019 01:40:23 +0100 (0:00:00.291) 0:03:06.739 ********* TASK [container-engine/docker : Ensure docker packages are installed] ********** Sunday 18 August 2019 01:40:23 +0100 (0:00:00.354) 0:03:07.093 ********* TASK [container-engine/docker : get available packages on Ubuntu] ************** Sunday 18 August 2019 01:40:24 +0100 (0:00:00.349) 0:03:07.442 ********* TASK [container-engine/docker : show available packages on ubuntu] ************* Sunday 18 August 2019 01:40:24 +0100 (0:00:00.327) 0:03:07.770 ********* TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Sunday 18 August 2019 01:40:24 +0100 (0:00:00.280) 0:03:08.050 ********* TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Sunday 18 August 2019 01:40:24 +0100 (0:00:00.297) 0:03:08.348 ********* ok: [kube1] ok: [kube2] ok: [kube3] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Sunday 18 August 2019 01:40:26 +0100 (0:00:02.057) 0:03:10.405 ********* ok: [kube1] ok: [kube3] ok: [kube2] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Sunday 18 August 2019 01:40:28 +0100 (0:00:01.060) 0:03:11.466 ********* TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Sunday 18 August 2019 01:40:28 +0100 (0:00:00.300) 0:03:11.766 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Sunday 18 August 2019 01:40:29 +0100 (0:00:01.075) 0:03:12.842 ********* TASK [container-engine/docker : get systemd version] *************************** Sunday 18 August 2019 01:40:29 +0100 (0:00:00.328) 0:03:13.171 ********* TASK [container-engine/docker : Write docker.service systemd file] ************* Sunday 18 August 2019 01:40:30 +0100 (0:00:00.318) 0:03:13.489 ********* TASK [container-engine/docker : Write docker options systemd drop-in] ********** Sunday 18 August 2019 01:40:30 +0100 (0:00:00.317) 0:03:13.806 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Sunday 18 August 2019 01:40:32 +0100 (0:00:02.208) 0:03:16.014 ********* changed: [kube1] changed: [kube3] changed: [kube2] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Sunday 18 August 2019 01:40:34 +0100 (0:00:02.133) 0:03:18.148 ********* TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Sunday 18 August 2019 01:40:35 +0100 (0:00:00.336) 0:03:18.485 ********* RUNNING HANDLER [container-engine/docker : restart docker] ********************* Sunday 18 August 2019 01:40:35 +0100 (0:00:00.238) 0:03:18.723 ********* changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Sunday 18 August 2019 01:40:37 +0100 (0:00:01.971) 0:03:20.695 ********* changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER 
[container-engine/docker : Docker | reload docker.socket] ****** Sunday 18 August 2019 01:40:38 +0100 (0:00:01.176) 0:03:21.871 ********* RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Sunday 18 August 2019 01:40:38 +0100 (0:00:00.307) 0:03:22.179 ********* changed: [kube2] changed: [kube1] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Sunday 18 August 2019 01:40:42 +0100 (0:00:04.176) 0:03:26.355 ********* Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube1] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Sunday 18 August 2019 01:40:53 +0100 (0:00:10.272) 0:03:36.628 ********* changed: [kube1] changed: [kube3] changed: [kube2] TASK [container-engine/docker : ensure docker service is started and enabled] *** Sunday 18 August 2019 01:40:54 +0100 (0:00:01.265) 0:03:37.893 ********* ok: [kube1] => (item=docker) ok: [kube2] => (item=docker) ok: [kube3] => (item=docker) TASK [download : include_tasks] ************************************************ Sunday 18 August 2019 01:40:55 +0100 (0:00:01.326) 0:03:39.220 ********* included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Sunday 18 August 2019 01:40:56 +0100 (0:00:00.526) 0:03:39.746 ********* ok: [kube1] ok: [kube2] ok: [kube3] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Sunday 18 August 2019 01:40:57 +0100 (0:00:01.084) 0:03:40.831 ********* changed: [kube2] changed: [kube1] changed: [kube3] TASK [download : container_download | create local directory for saved/loaded container images] *** Sunday 18 August 2019 01:40:58 +0100 (0:00:01.066) 0:03:41.897 ********* TASK [download : 
Download items] *********************************************** Sunday 18 August 2019 01:40:58 +0100 (0:00:00.130) 0:03:42.028 ********* fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} [...identical 'delegate_to' failures for kube1, kube2 and kube3 repeated for each remaining download task, truncated...] included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 [...identical 'delegate_to' failures truncated...] PLAY RECAP ********************************************************************* kube1 : ok=109 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0 kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0 kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0 Sunday 18 August 2019 01:41:01 +0100 (0:00:02.751) 0:03:44.780 ********* =============================================================================== Install packages ------------------------------------------------------- 35.56s Wait for host to be available ------------------------------------------ 21.48s gather facts from all instances ---------------------------------------- 16.16s container-engine/docker : Docker | pause while Docker restarts --------- 10.27s Persist loaded modules -------------------------------------------------- 6.26s kubernetes/preinstall : Create kubernetes directories ------------------- 4.18s container-engine/docker : Docker | reload docker ------------------------ 4.18s download : Download items ----------------------------------------------- 2.75s bootstrap-os : Assign 
inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.69s Load required kernel modules -------------------------------------------- 2.63s kubernetes/preinstall : Create cni directories -------------------------- 2.53s bootstrap-os : Gather nodes hostnames ----------------------------------- 2.51s Extend root VG ---------------------------------------------------------- 2.44s kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.30s container-engine/docker : Write docker options systemd drop-in ---------- 2.21s container-engine/docker : Write docker dns systemd drop-in -------------- 2.13s Gathering Facts --------------------------------------------------------- 2.13s download : Download items ----------------------------------------------- 2.09s container-engine/docker : ensure service is started if docker packages are already present --- 2.06s kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.04s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Sun Aug 18 01:19:16 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 18 Aug 2019 01:19:16 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #288 In-Reply-To: <1223732589.3831.1566004723631.JavaMail.jenkins@jenkins.ci.centos.org> References: <1223732589.3831.1566004723631.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1013582278.3879.1566091156862.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 57.65 KB...] changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. 
Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... [701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], 
u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── create └── prepare --> Scenario: 'default' --> Action: 'create' [DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg. PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] 
******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', 
u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
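[Editor's note] The [701] and [703] ansible-lint errors above all point at the same cause: roles/firewall_config/meta/main.yml still carries the ansible-galaxy placeholder metadata ('your name', 'your description', no platforms list). A minimal sketch of a meta/main.yml that would satisfy those rules — the field values below are illustrative assumptions, not the project's actual metadata:

```yaml
# Sketch only: illustrative values, not gluster-ansible-infra's real metadata.
galaxy_info:
  author: Gluster maintainers            # [703] replace placeholder 'your name'
  description: Firewall configuration for GlusterFS nodes   # [703]
  license: GPLv3                         # [703] replace 'license (GPLv2, CC-BY, etc)'
  min_ansible_version: 2.5
  platforms:                             # [701] role info should contain platforms
    - name: EL
      versions:
        - 7
  galaxy_tags:
    - gluster
    - firewall
dependencies: []
```

With non-placeholder values in place, `molecule lint` would proceed past the 'lint' action instead of aborting the test sequence there.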
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Mon Aug 19 00:16:41 2019 From: ci at centos.org (ci at centos.org) Date: Mon, 19 Aug 2019 00:16:41 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #460 In-Reply-To: <234008036.3874.1566087402785.JavaMail.jenkins@jenkins.ci.centos.org> References: <234008036.3874.1566087402785.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1872885795.3926.1566173801590.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 39.05 KB...] Transaction test succeeded Running transaction Installing : python36-libs-3.6.8-1.el7.x86_64 1/52 Installing : python36-3.6.8-1.el7.x86_64 2/52 Installing : apr-1.4.8-3.el7_4.1.x86_64 3/52 Installing : mpfr-3.1.1-4.el7.x86_64 4/52 Installing : libmpc-1.0.1-3.el7.x86_64 5/52 Installing : apr-util-1.5.2-6.el7.x86_64 6/52 Installing : python36-six-1.11.0-3.el7.noarch 7/52 Installing : cpp-4.8.5-36.el7_6.2.x86_64 8/52 Installing : python36-idna-2.7-2.el7.noarch 9/52 Installing : python36-pysocks-1.6.8-6.el7.noarch 10/52 Installing : python36-urllib3-1.19.1-5.el7.noarch 11/52 Installing : python36-pyroute2-0.4.13-2.el7.noarch 12/52 Installing : python36-setuptools-39.2.0-3.el7.noarch 13/52 Installing : python36-chardet-2.3.0-6.el7.noarch 14/52 Installing : python36-requests-2.12.5-3.el7.noarch 15/52 Installing : python36-distro-1.2.0-3.el7.noarch 16/52 Installing : python36-markupsafe-0.23-3.el7.x86_64 17/52 Installing : python36-jinja2-2.8.1-2.el7.noarch 18/52 Installing : python36-rpm-4.11.3-4.el7.x86_64 19/52 Installing : elfutils-0.172-2.el7.x86_64 20/52 Installing : unzip-6.0-19.el7.x86_64 21/52 Installing : dwz-0.11-3.el7.x86_64 
22/52 Installing : bzip2-1.0.6-13.el7.x86_64 23/52 Installing : distribution-gpg-keys-1.32-1.el7.noarch 24/52 Installing : mock-core-configs-30.4-1.el7.noarch 25/52 Installing : usermode-1.111-5.el7.x86_64 26/52 Installing : pakchois-0.4-10.el7.x86_64 27/52 Installing : patch-2.7.1-10.el7_5.x86_64 28/52 Installing : libmodman-2.0.1-8.el7.x86_64 29/52 Installing : libproxy-0.4.11-11.el7.x86_64 30/52 Installing : gdb-7.6.1-114.el7.x86_64 31/52 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/52 Installing : perl-srpm-macros-1-8.el7.noarch 33/52 Installing : pigz-2.3.4-1.el7.x86_64 34/52 Installing : golang-src-1.11.5-1.el7.noarch 35/52 Installing : nettle-2.7.1-8.el7.x86_64 36/52 Installing : kernel-headers-3.10.0-957.27.2.el7.x86_64 37/52 Installing : glibc-headers-2.17-260.el7_6.6.x86_64 38/52 Installing : glibc-devel-2.17-260.el7_6.6.x86_64 39/52 Installing : gcc-4.8.5-36.el7_6.2.x86_64 40/52 Installing : zip-3.0-11.el7.x86_64 41/52 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 42/52 Installing : mercurial-2.6.2-8.el7_4.x86_64 43/52 Installing : trousers-0.3.14-2.el7.x86_64 44/52 Installing : gnutls-3.3.29-9.el7_6.x86_64 45/52 Installing : neon-0.30.0-3.el7.x86_64 46/52 Installing : subversion-libs-1.7.14-14.el7.x86_64 47/52 Installing : subversion-1.7.14-14.el7.x86_64 48/52 Installing : golang-1.11.5-1.el7.x86_64 49/52 Installing : golang-bin-1.11.5-1.el7.x86_64 50/52 Installing : rpm-build-4.11.3-35.el7.x86_64 51/52 Installing : mock-1.4.16-1.el7.noarch 52/52 Verifying : trousers-0.3.14-2.el7.x86_64 1/52 Verifying : python36-idna-2.7-2.el7.noarch 2/52 Verifying : rpm-build-4.11.3-35.el7.x86_64 3/52 Verifying : glibc-headers-2.17-260.el7_6.6.x86_64 4/52 Verifying : python36-pysocks-1.6.8-6.el7.noarch 5/52 Verifying : mercurial-2.6.2-8.el7_4.x86_64 6/52 Verifying : zip-3.0-11.el7.x86_64 7/52 Verifying : python36-3.6.8-1.el7.x86_64 8/52 Verifying : subversion-libs-1.7.14-14.el7.x86_64 9/52 Verifying : python36-urllib3-1.19.1-5.el7.noarch 10/52 
Verifying : kernel-headers-3.10.0-957.27.2.el7.x86_64 11/52 Verifying : nettle-2.7.1-8.el7.x86_64 12/52 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 13/52 Verifying : golang-src-1.11.5-1.el7.noarch 14/52 Verifying : python36-pyroute2-0.4.13-2.el7.noarch 15/52 Verifying : pigz-2.3.4-1.el7.x86_64 16/52 Verifying : perl-srpm-macros-1-8.el7.noarch 17/52 Verifying : golang-1.11.5-1.el7.x86_64 18/52 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 19/52 Verifying : golang-bin-1.11.5-1.el7.x86_64 20/52 Verifying : gdb-7.6.1-114.el7.x86_64 21/52 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 22/52 Verifying : gnutls-3.3.29-9.el7_6.x86_64 23/52 Verifying : mock-1.4.16-1.el7.noarch 24/52 Verifying : libmodman-2.0.1-8.el7.x86_64 25/52 Verifying : python36-setuptools-39.2.0-3.el7.noarch 26/52 Verifying : mpfr-3.1.1-4.el7.x86_64 27/52 Verifying : python36-six-1.11.0-3.el7.noarch 28/52 Verifying : apr-util-1.5.2-6.el7.x86_64 29/52 Verifying : python36-chardet-2.3.0-6.el7.noarch 30/52 Verifying : patch-2.7.1-10.el7_5.x86_64 31/52 Verifying : libmpc-1.0.1-3.el7.x86_64 32/52 Verifying : pakchois-0.4-10.el7.x86_64 33/52 Verifying : neon-0.30.0-3.el7.x86_64 34/52 Verifying : usermode-1.111-5.el7.x86_64 35/52 Verifying : apr-1.4.8-3.el7_4.1.x86_64 36/52 Verifying : libproxy-0.4.11-11.el7.x86_64 37/52 Verifying : mock-core-configs-30.4-1.el7.noarch 38/52 Verifying : distribution-gpg-keys-1.32-1.el7.noarch 39/52 Verifying : glibc-devel-2.17-260.el7_6.6.x86_64 40/52 Verifying : bzip2-1.0.6-13.el7.x86_64 41/52 Verifying : subversion-1.7.14-14.el7.x86_64 42/52 Verifying : python36-distro-1.2.0-3.el7.noarch 43/52 Verifying : dwz-0.11-3.el7.x86_64 44/52 Verifying : unzip-6.0-19.el7.x86_64 45/52 Verifying : python36-markupsafe-0.23-3.el7.x86_64 46/52 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 47/52 Verifying : python36-requests-2.12.5-3.el7.noarch 48/52 Verifying : python36-jinja2-2.8.1-2.el7.noarch 49/52 Verifying : python36-libs-3.6.8-1.el7.x86_64 50/52 Verifying : 
elfutils-0.172-2.el7.x86_64 51/52 Verifying : python36-rpm-4.11.3-4.el7.x86_64 52/52 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.16-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.32-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.6 glibc-headers.x86_64 0:2.17-260.el7_6.6 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.27.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.4-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python36.x86_64 0:3.6.8-1.el7 python36-chardet.noarch 0:2.3.0-6.el7 python36-distro.noarch 0:1.2.0-3.el7 python36-idna.noarch 0:2.7-2.el7 python36-jinja2.noarch 0:2.8.1-2.el7 python36-libs.x86_64 0:3.6.8-1.el7 python36-markupsafe.x86_64 0:0.23-3.el7 python36-pyroute2.noarch 0:0.4.13-2.el7 python36-pysocks.noarch 0:1.6.8-6.el7 python36-requests.noarch 0:2.12.5-3.el7 python36-rpm.x86_64 0:4.11.3-4.el7 python36-setuptools.noarch 0:39.2.0-3.el7 python36-six.noarch 0:1.11.0-3.el7 python36-urllib3.noarch 0:1.19.1-5.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1824 0 --:--:-- --:--:-- --:--:-- 1833 47 8513k 47 4015k 0 0 5906k 0 0:00:01 --:--:-- 0:00:01 5906k100 8513k 100 8513k 0 0 11.5M 0 --:--:-- --:--:-- --:--:-- 112M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2236 0 --:--:-- --:--:-- --:--:-- 2239 100 38.3M 100 38.3M 0 0 48.2M 0 --:--:-- --:--:-- --:--:-- 48.2M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 512 0 --:--:-- --:--:-- --:--:-- 513 0 0 0 620 0 0 1541 0 --:--:-- --:--:-- --:--:-- 1541 100 10.7M 100 10.7M 0 0 15.6M 0 --:--:-- --:--:-- --:--:-- 15.6M ~/nightlyrpmqXhzaw/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmqXhzaw/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmqXhzaw/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmqXhzaw ~ INFO: mock.py version 1.4.16 starting (python version = 3.6.8)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmqXhzaw/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.16 INFO: Mock Version: 1.4.16 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmqXhzaw/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 26 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 1b1cd94bdc964421a22180c3bd14a9a8 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.yfstpn2w:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins5950690912175732558.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done 949e025b +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 110 | n46.pufty | 172.19.3.110 | pufty | 3918 | Deployed | 949e025b | None | None | 7 | x86_64 | 1 | 2450 | None | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Mon Aug 19 00:40:53 2019 From: ci at centos.org (ci at centos.org) Date: Mon, 19 Aug 2019 00:40:53 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #264 In-Reply-To: <100250500.3875.1566088861800.JavaMail.jenkins@jenkins.ci.centos.org> References: <100250500.3875.1566088861800.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1611933977.3927.1566175253819.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 289.03 KB...] 
TASK [container-engine/docker : check number of search domains] **************** Monday 19 August 2019 01:40:09 +0100 (0:00:00.307) 0:03:02.537 ********* TASK [container-engine/docker : check length of search domains] **************** Monday 19 August 2019 01:40:10 +0100 (0:00:00.309) 0:03:02.846 ********* TASK [container-engine/docker : check for minimum kernel version] ************** Monday 19 August 2019 01:40:10 +0100 (0:00:00.301) 0:03:03.148 ********* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Monday 19 August 2019 01:40:10 +0100 (0:00:00.302) 0:03:03.450 ********* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] *** Monday 19 August 2019 01:40:11 +0100 (0:00:00.670) 0:03:04.121 ********* TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Monday 19 August 2019 01:40:12 +0100 (0:00:01.341) 0:03:05.462 ********* TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Monday 19 August 2019 01:40:12 +0100 (0:00:00.271) 0:03:05.734 ********* TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Monday 19 August 2019 01:40:13 +0100 (0:00:00.264) 0:03:05.999 ********* TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Monday 19 August 2019 01:40:13 +0100 (0:00:00.316) 0:03:06.315 ********* TASK [container-engine/docker : Configure docker repository on Fedora] ********* Monday 19 August 2019 01:40:13 +0100 (0:00:00.316) 0:03:06.631 ********* TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Monday 19 August 2019 01:40:14 +0100 (0:00:00.291) 0:03:06.923 ********* TASK [container-engine/docker : Copy yum.conf for editing] ********************* Monday 19 August 2019 01:40:14 +0100 (0:00:00.304) 0:03:07.228 ********* TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Monday 19 August 2019 
01:40:14 +0100 (0:00:00.351) 0:03:07.580 *********
TASK [container-engine/docker : ensure docker packages are installed] **********
Monday 19 August 2019 01:40:15 +0100 (0:00:00.321) 0:03:07.901 *********
TASK [container-engine/docker : Ensure docker packages are installed] **********
Monday 19 August 2019 01:40:15 +0100 (0:00:00.375) 0:03:08.277 *********
TASK [container-engine/docker : get available packages on Ubuntu] **************
Monday 19 August 2019 01:40:15 +0100 (0:00:00.362) 0:03:08.640 *********
TASK [container-engine/docker : show available packages on ubuntu] *************
Monday 19 August 2019 01:40:16 +0100 (0:00:00.278) 0:03:08.919 *********
TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] ***
Monday 19 August 2019 01:40:16 +0100 (0:00:00.290) 0:03:09.209 *********
TASK [container-engine/docker : ensure service is started if docker packages are already present] ***
Monday 19 August 2019 01:40:16 +0100 (0:00:00.294) 0:03:09.504 *********
ok: [kube2]
ok: [kube1]
ok: [kube3]
[WARNING]: flush_handlers task does not support when conditional
TASK [container-engine/docker : set fact for docker_version] *******************
Monday 19 August 2019 01:40:18 +0100 (0:00:01.999) 0:03:11.503 *********
ok: [kube1]
ok: [kube2]
ok: [kube3]
TASK [container-engine/docker : check minimum docker version for docker_dns mode. You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] ***
Monday 19 August 2019 01:40:19 +0100 (0:00:01.185) 0:03:12.688 *********
TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] ***
Monday 19 August 2019 01:40:20 +0100 (0:00:00.311) 0:03:13.000 *********
changed: [kube1]
changed: [kube3]
changed: [kube2]
TASK [container-engine/docker : Write docker proxy drop-in] ********************
Monday 19 August 2019 01:40:21 +0100 (0:00:01.048) 0:03:14.049 *********
TASK [container-engine/docker : get systemd version] ***************************
Monday 19 August 2019 01:40:21 +0100 (0:00:00.402) 0:03:14.451 *********
TASK [container-engine/docker : Write docker.service systemd file] *************
Monday 19 August 2019 01:40:21 +0100 (0:00:00.315) 0:03:14.767 *********
TASK [container-engine/docker : Write docker options systemd drop-in] **********
Monday 19 August 2019 01:40:22 +0100 (0:00:00.315) 0:03:15.083 *********
changed: [kube1]
changed: [kube2]
changed: [kube3]
TASK [container-engine/docker : Write docker dns systemd drop-in] **************
Monday 19 August 2019 01:40:24 +0100 (0:00:02.250) 0:03:17.333 *********
changed: [kube2]
changed: [kube1]
changed: [kube3]
TASK [container-engine/docker : Copy docker orphan clean up script to the node] ***
Monday 19 August 2019 01:40:26 +0100 (0:00:02.291) 0:03:19.624 *********
TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] ***
Monday 19 August 2019 01:40:27 +0100 (0:00:00.403) 0:03:20.028 *********
RUNNING HANDLER [container-engine/docker : restart docker] *********************
Monday 19 August 2019 01:40:27 +0100 (0:00:00.275) 0:03:20.303 *********
changed: [kube1]
changed: [kube3]
changed: [kube2]
RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************
Monday 19 August 2019 01:40:29 +0100 (0:00:01.989) 0:03:22.293 *********
changed: [kube1]
changed: [kube3]
changed: [kube2]
RUNNING HANDLER [container-engine/docker : Docker | reload docker.socket] ******
Monday 19 August 2019 01:40:30 +0100 (0:00:01.133) 0:03:23.427 *********
RUNNING HANDLER [container-engine/docker : Docker | reload docker] *************
Monday 19 August 2019 01:40:30 +0100 (0:00:00.346) 0:03:23.773 *********
changed: [kube1]
changed: [kube3]
changed: [kube2]
RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] ***
Monday 19 August 2019 01:40:35 +0100 (0:00:04.172) 0:03:27.947 *********
Pausing for 10 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
[container-engine/docker : Docker | pause while Docker restarts]
Waiting for docker restart:
ok: [kube1]
RUNNING HANDLER [container-engine/docker : Docker | wait for docker] ***********
Monday 19 August 2019 01:40:45 +0100 (0:00:10.202) 0:03:38.149 *********
changed: [kube1]
changed: [kube3]
changed: [kube2]
TASK [container-engine/docker : ensure docker service is started and enabled] ***
Monday 19 August 2019 01:40:46 +0100 (0:00:01.271) 0:03:39.421 *********
ok: [kube1] => (item=docker)
ok: [kube2] => (item=docker)
ok: [kube3] => (item=docker)
TASK [download : include_tasks] ************************************************
Monday 19 August 2019 01:40:47 +0100 (0:00:01.176) 0:03:40.598 *********
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3
TASK [download : Register docker images info] **********************************
Monday 19 August 2019 01:40:48 +0100 (0:00:00.536) 0:03:41.134 *********
ok: [kube1]
ok: [kube2]
ok: [kube3]
TASK [download : container_download | Create dest directory for saved/loaded container images] ***
Monday 19 August 2019 01:40:49 +0100 (0:00:01.050) 0:03:42.185 *********
changed: [kube1]
changed: [kube2]
changed: [kube3]
TASK [download : container_download | create local directory for saved/loaded container images] ***
Monday 19 August 2019 01:40:50 +0100 (0:00:00.985) 0:03:43.171 *********
TASK [download : Download items] ***********************************************
Monday 19 August 2019 01:40:50 +0100 (0:00:00.160) 0:03:43.331 *********
fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
fatal: [kube2]: FAILED!
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
PLAY RECAP *********************************************************************
kube1 : ok=109 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0
kube2 : ok=96  changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0
kube3 : ok=94  changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0
Monday 19 August 2019 01:40:53 +0100 (0:00:02.825) 0:03:46.156 *********
===============================================================================
Install packages ------------------------------------------------------- 36.06s
Wait for host to be available ------------------------------------------ 21.44s
gather facts from all instances ---------------------------------------- 17.43s
container-engine/docker : Docker | pause while Docker restarts --------- 10.20s
Persist loaded modules -------------------------------------------------- 5.99s
container-engine/docker : Docker | reload docker ------------------------ 4.17s
kubernetes/preinstall : Create kubernetes directories ------------------- 4.14s
download : Download items ----------------------------------------------- 2.83s
bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.74s
kubernetes/preinstall : Create cni directories -------------------------- 2.68s
bootstrap-os : Gather nodes hostnames ----------------------------------- 2.66s
Load required kernel modules -------------------------------------------- 2.61s
Extend root VG ---------------------------------------------------------- 2.31s
kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.31s
container-engine/docker : Write docker dns systemd drop-in -------------- 2.29s
container-engine/docker : Write docker options systemd drop-in ---------- 2.25s
download : Sync container ----------------------------------------------- 2.13s
download : Download items ----------------------------------------------- 2.08s
kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.05s
Gathering Facts --------------------------------------------------------- 2.03s
==> kube3: An error occurred. The error will be shown after all tasks complete.
An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below.
An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again:
Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again.
Build step 'Execute shell' marked build as failure
Performing Post build task...
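The repeated fatal above is Ansible 2.8 rejecting `delegate_to` as a keyword on a task include: `include_tasks` creates a `TaskInclude` object, which no longer accepts `delegate_to` directly. A minimal sketch of the usual workaround, assuming Ansible >= 2.7 (which added the `apply` keyword) and using kubespray's `download_delegate` variable name purely as an illustration:

```yaml
# Rejected on Ansible 2.8 — delegate_to is not a valid attribute for a TaskInclude:
# - include_tasks: download_container.yml
#   delegate_to: "{{ download_delegate }}"

# Workaround sketch: forward the keyword to the *included* tasks via apply,
# so delegation is applied per task rather than to the include itself.
- name: container_download | include download tasks with delegation applied
  include_tasks:
    file: download_container.yml
    apply:
      delegate_to: "{{ download_delegate }}"
```

Alternatively, `import_tasks` (static) still propagates `delegate_to` to the imported tasks; which of the two the kubespray fix actually used is not shown in this log.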
Could not match :Build started : False
Logical operation result is FALSE
Skipping script : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0

From ci at centos.org Mon Aug 19 01:23:56 2019
From: ci at centos.org (ci at centos.org)
Date: Mon, 19 Aug 2019 01:23:56 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #289
In-Reply-To: <1013582278.3879.1566091156862.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1013582278.3879.1566091156862.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1165338122.3936.1566177836236.JavaMail.jenkins@jenkins.ci.centos.org>

See ------------------------------------------
[...truncated 57.23 KB...]
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Wait for instance(s) creation to complete] *******************************
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
--> Scenario: 'default'
--> Action: 'prepare'
PLAY [Prepare] *****************************************************************
TASK [Gathering Facts] *********************************************************
ok: [instance]
TASK [Install Dependency Packages] *********************************************
changed: [instance]
TASK [Install Dependency Packages] *********************************************
changed: [instance]
PLAY RECAP *********************************************************************
instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: author
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: description
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: company
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: license
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
An error occurred during the test sequence action: 'lint'. Cleaning up.
--> Scenario: 'default'
--> Action: 'destroy'
PLAY [Destroy] *****************************************************************
TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
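The ansible-lint failures above ([701] and the [703] series) are all complaints that `meta/main.yml` still carries `ansible-galaxy init` placeholder values and no `platforms` list. A sketch of a metadata file that would satisfy those rules; the author, description, license, and platform values here are illustrative, not the project's actual choices:

```yaml
# roles/firewall_config/meta/main.yml (illustrative values only)
galaxy_info:
  author: Gluster Ansible maintainers          # [703] replaces 'your name'
  description: Configure the firewall for GlusterFS nodes   # [703]
  company: ''                                  # [703] drop the placeholder
  license: GPLv3                               # [703] use the project's real license
  min_ansible_version: 2.5
  platforms:                                   # [701] Role info should contain platforms
    - name: EL
      versions:
        - 7
  galaxy_tags: []
dependencies: []
```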
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Delete docker network(s)] ************************************************
PLAY RECAP *********************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── create
    └── prepare

--> Scenario: 'default'
--> Action: 'create'
[DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
PLAY [Create] ******************************************************************
TASK [Log into a Docker registry] **********************************************
skipping: [localhost] => (item=None)
TASK [Create Dockerfiles from image names] *************************************
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Discover local Docker images] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]
TASK [Build an Ansible compatible image] ***************************************
ok: [localhost] => (item=None)
ok: [localhost]
TASK [Create docker network(s)] ************************************************
TASK [Determine the CMD directives] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]
TASK [Create molecule instance(s)] *********************************************
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Wait for instance(s) creation to complete] *******************************
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
--> Scenario: 'default'
--> Action: 'prepare'
PLAY [Prepare] *****************************************************************
TASK [Gathering Facts] *********************************************************
ok: [instance]
TASK [Install Dependency Packages] *********************************************
changed: [instance]
TASK [Install Dependency Packages] *********************************************
changed: [instance]
PLAY RECAP *********************************************************************
instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
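The `cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory` failure from run-centos-ci.sh above is the usual relative-`cd` pitfall: once the script has already changed into the repository, the repo-prefixed relative path no longer resolves. A hedged sketch of the fix, anchoring role paths to a single absolute root; `REPO_ROOT` and the temporary directory are stand-ins, not names from the actual script:

```shell
#!/bin/sh
# Sketch: resolve role directories from one absolute root instead of
# chaining relative `cd`s. A temp dir stands in for the CI clone of
# gluster-ansible-infra so the example is self-contained.
REPO_ROOT="$(mktemp -d)/gluster-ansible-infra"
mkdir -p "${REPO_ROOT}/roles/backend_setup"

# Absolute path works no matter what the current directory is:
cd "${REPO_ROOT}/roles/backend_setup" || exit 1
echo "now in: $(pwd)"
```

The same pattern (`cd "$(dirname "$0")/.."` or an explicit `$WORKSPACE` prefix) avoids the failure regardless of where Jenkins invokes the script from.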
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: author
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: description
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: company
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: license
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
An error occurred during the test sequence action: 'lint'. Cleaning up.
--> Scenario: 'default'
--> Action: 'destroy'
PLAY [Destroy] *****************************************************************
TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Delete docker network(s)] ************************************************
PLAY RECAP *********************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
Build step 'Execute shell' marked build as failure
Performing Post build task...
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Tue Aug 20 00:16:21 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 20 Aug 2019 00:16:21 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #461 In-Reply-To: <1872885795.3926.1566173801590.JavaMail.jenkins@jenkins.ci.centos.org> References: <1872885795.3926.1566173801590.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <968527053.4032.1566260181372.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 38.67 KB...] Transaction test succeeded Running transaction Installing : python36-libs-3.6.8-1.el7.x86_64 1/52 Installing : python36-3.6.8-1.el7.x86_64 2/52 Installing : apr-1.4.8-3.el7_4.1.x86_64 3/52 Installing : mpfr-3.1.1-4.el7.x86_64 4/52 Installing : libmpc-1.0.1-3.el7.x86_64 5/52 Installing : apr-util-1.5.2-6.el7.x86_64 6/52 Installing : python36-six-1.11.0-3.el7.noarch 7/52 Installing : cpp-4.8.5-36.el7_6.2.x86_64 8/52 Installing : python36-idna-2.7-2.el7.noarch 9/52 Installing : python36-pysocks-1.6.8-6.el7.noarch 10/52 Installing : python36-urllib3-1.19.1-5.el7.noarch 11/52 Installing : python36-pyroute2-0.4.13-2.el7.noarch 12/52 Installing : python36-setuptools-39.2.0-3.el7.noarch 13/52 Installing : python36-chardet-2.3.0-6.el7.noarch 14/52 Installing : python36-requests-2.12.5-3.el7.noarch 15/52 Installing : python36-distro-1.2.0-3.el7.noarch 16/52 Installing : python36-markupsafe-0.23-3.el7.x86_64 17/52 Installing : python36-jinja2-2.8.1-2.el7.noarch 18/52 Installing : python36-rpm-4.11.3-4.el7.x86_64 19/52 Installing : elfutils-0.172-2.el7.x86_64 20/52 Installing : unzip-6.0-19.el7.x86_64 21/52 Installing : dwz-0.11-3.el7.x86_64 
22/52 Installing : bzip2-1.0.6-13.el7.x86_64 23/52 Installing : distribution-gpg-keys-1.32-1.el7.noarch 24/52 Installing : mock-core-configs-30.4-1.el7.noarch 25/52 Installing : usermode-1.111-5.el7.x86_64 26/52 Installing : pakchois-0.4-10.el7.x86_64 27/52 Installing : patch-2.7.1-10.el7_5.x86_64 28/52 Installing : libmodman-2.0.1-8.el7.x86_64 29/52 Installing : libproxy-0.4.11-11.el7.x86_64 30/52 Installing : gdb-7.6.1-114.el7.x86_64 31/52 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/52 Installing : perl-srpm-macros-1-8.el7.noarch 33/52 Installing : pigz-2.3.4-1.el7.x86_64 34/52 Installing : golang-src-1.11.5-1.el7.noarch 35/52 Installing : nettle-2.7.1-8.el7.x86_64 36/52 Installing : kernel-headers-3.10.0-957.27.2.el7.x86_64 37/52 Installing : glibc-headers-2.17-260.el7_6.6.x86_64 38/52 Installing : glibc-devel-2.17-260.el7_6.6.x86_64 39/52 Installing : gcc-4.8.5-36.el7_6.2.x86_64 40/52 Installing : zip-3.0-11.el7.x86_64 41/52 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 42/52 Installing : mercurial-2.6.2-8.el7_4.x86_64 43/52 Installing : trousers-0.3.14-2.el7.x86_64 44/52 Installing : gnutls-3.3.29-9.el7_6.x86_64 45/52 Installing : neon-0.30.0-3.el7.x86_64 46/52 Installing : subversion-libs-1.7.14-14.el7.x86_64 47/52 Installing : subversion-1.7.14-14.el7.x86_64 48/52 Installing : golang-1.11.5-1.el7.x86_64 49/52 Installing : golang-bin-1.11.5-1.el7.x86_64 50/52 Installing : rpm-build-4.11.3-35.el7.x86_64 51/52 Installing : mock-1.4.16-1.el7.noarch 52/52 Verifying : trousers-0.3.14-2.el7.x86_64 1/52 Verifying : python36-idna-2.7-2.el7.noarch 2/52 Verifying : rpm-build-4.11.3-35.el7.x86_64 3/52 Verifying : glibc-headers-2.17-260.el7_6.6.x86_64 4/52 Verifying : python36-pysocks-1.6.8-6.el7.noarch 5/52 Verifying : mercurial-2.6.2-8.el7_4.x86_64 6/52 Verifying : zip-3.0-11.el7.x86_64 7/52 Verifying : python36-3.6.8-1.el7.x86_64 8/52 Verifying : subversion-libs-1.7.14-14.el7.x86_64 9/52 Verifying : python36-urllib3-1.19.1-5.el7.noarch 10/52 
Verifying : kernel-headers-3.10.0-957.27.2.el7.x86_64 11/52 Verifying : nettle-2.7.1-8.el7.x86_64 12/52 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 13/52 Verifying : golang-src-1.11.5-1.el7.noarch 14/52 Verifying : python36-pyroute2-0.4.13-2.el7.noarch 15/52 Verifying : pigz-2.3.4-1.el7.x86_64 16/52 Verifying : perl-srpm-macros-1-8.el7.noarch 17/52 Verifying : golang-1.11.5-1.el7.x86_64 18/52 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 19/52 Verifying : golang-bin-1.11.5-1.el7.x86_64 20/52 Verifying : gdb-7.6.1-114.el7.x86_64 21/52 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 22/52 Verifying : gnutls-3.3.29-9.el7_6.x86_64 23/52 Verifying : mock-1.4.16-1.el7.noarch 24/52 Verifying : libmodman-2.0.1-8.el7.x86_64 25/52 Verifying : python36-setuptools-39.2.0-3.el7.noarch 26/52 Verifying : mpfr-3.1.1-4.el7.x86_64 27/52 Verifying : python36-six-1.11.0-3.el7.noarch 28/52 Verifying : apr-util-1.5.2-6.el7.x86_64 29/52 Verifying : python36-chardet-2.3.0-6.el7.noarch 30/52 Verifying : patch-2.7.1-10.el7_5.x86_64 31/52 Verifying : libmpc-1.0.1-3.el7.x86_64 32/52 Verifying : pakchois-0.4-10.el7.x86_64 33/52 Verifying : neon-0.30.0-3.el7.x86_64 34/52 Verifying : usermode-1.111-5.el7.x86_64 35/52 Verifying : apr-1.4.8-3.el7_4.1.x86_64 36/52 Verifying : libproxy-0.4.11-11.el7.x86_64 37/52 Verifying : mock-core-configs-30.4-1.el7.noarch 38/52 Verifying : distribution-gpg-keys-1.32-1.el7.noarch 39/52 Verifying : glibc-devel-2.17-260.el7_6.6.x86_64 40/52 Verifying : bzip2-1.0.6-13.el7.x86_64 41/52 Verifying : subversion-1.7.14-14.el7.x86_64 42/52 Verifying : python36-distro-1.2.0-3.el7.noarch 43/52 Verifying : dwz-0.11-3.el7.x86_64 44/52 Verifying : unzip-6.0-19.el7.x86_64 45/52 Verifying : python36-markupsafe-0.23-3.el7.x86_64 46/52 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 47/52 Verifying : python36-requests-2.12.5-3.el7.noarch 48/52 Verifying : python36-jinja2-2.8.1-2.el7.noarch 49/52 Verifying : python36-libs-3.6.8-1.el7.x86_64 50/52 Verifying : 
elfutils-0.172-2.el7.x86_64 51/52 Verifying : python36-rpm-4.11.3-4.el7.x86_64 52/52 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.16-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.32-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.6 glibc-headers.x86_64 0:2.17-260.el7_6.6 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.27.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.4-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python36.x86_64 0:3.6.8-1.el7 python36-chardet.noarch 0:2.3.0-6.el7 python36-distro.noarch 0:1.2.0-3.el7 python36-idna.noarch 0:2.7-2.el7 python36-jinja2.noarch 0:2.8.1-2.el7 python36-libs.x86_64 0:3.6.8-1.el7 python36-markupsafe.x86_64 0:0.23-3.el7 python36-pyroute2.noarch 0:0.4.13-2.el7 python36-pysocks.noarch 0:1.6.8-6.el7 python36-requests.noarch 0:2.12.5-3.el7 python36-rpm.x86_64 0:4.11.3-4.el7 python36-setuptools.noarch 0:39.2.0-3.el7 python36-six.noarch 0:1.11.0-3.el7 python36-urllib3.noarch 0:1.19.1-5.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1644 0 --:--:-- --:--:-- --:--:-- 1653 100 8513k 100 8513k 0 0 9667k 0 --:--:-- --:--:-- --:--:-- 9667k Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2087 0 --:--:-- --:--:-- --:--:-- 2090 100 38.3M 100 38.3M 0 0 47.9M 0 --:--:-- --:--:-- --:--:-- 47.9M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 534 0 --:--:-- --:--:-- --:--:-- 536 0 0 0 620 0 0 1573 0 --:--:-- --:--:-- --:--:-- 1573 100 10.7M 100 10.7M 0 0 15.5M 0 --:--:-- --:--:-- --:--:-- 15.5M ~/nightlyrpmdLfsse/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmdLfsse/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmdLfsse/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmdLfsse ~ INFO: mock.py version 1.4.16 starting (python version = 3.6.8)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmdLfsse/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.16 INFO: Mock Version: 1.4.16 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmdLfsse/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 25 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M cf324a96ca4b440fb4704cbbb54660da -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.jj32dd3k:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins523253653160339876.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done c23edb65 +---------+-----------+------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 128 | n1.crusty | 172.19.2.1 | crusty | 3923 | Deployed | c23edb65 | None | None | 7 | x86_64 | 1 | 2000 | None | +---------+-----------+------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Tue Aug 20 00:40:52 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 20 Aug 2019 00:40:52 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #265 In-Reply-To: <1611933977.3927.1566175253819.JavaMail.jenkins@jenkins.ci.centos.org> References: <1611933977.3927.1566175253819.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1686578635.4033.1566261652937.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 289.02 KB...] 
TASK [container-engine/docker : check number of search domains] **************** Tuesday 20 August 2019 01:40:09 +0100 (0:00:00.302) 0:03:01.923 ******** TASK [container-engine/docker : check length of search domains] **************** Tuesday 20 August 2019 01:40:09 +0100 (0:00:00.310) 0:03:02.233 ******** TASK [container-engine/docker : check for minimum kernel version] ************** Tuesday 20 August 2019 01:40:09 +0100 (0:00:00.303) 0:03:02.537 ******** TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Tuesday 20 August 2019 01:40:10 +0100 (0:00:00.292) 0:03:02.829 ******** TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] *** Tuesday 20 August 2019 01:40:10 +0100 (0:00:00.609) 0:03:03.438 ******** TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Tuesday 20 August 2019 01:40:12 +0100 (0:00:01.373) 0:03:04.812 ******** TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Tuesday 20 August 2019 01:40:12 +0100 (0:00:00.267) 0:03:05.080 ******** TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Tuesday 20 August 2019 01:40:12 +0100 (0:00:00.261) 0:03:05.341 ******** TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Tuesday 20 August 2019 01:40:12 +0100 (0:00:00.313) 0:03:05.655 ******** TASK [container-engine/docker : Configure docker repository on Fedora] ********* Tuesday 20 August 2019 01:40:13 +0100 (0:00:00.312) 0:03:05.967 ******** TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Tuesday 20 August 2019 01:40:13 +0100 (0:00:00.282) 0:03:06.249 ******** TASK [container-engine/docker : Copy yum.conf for editing] ********************* Tuesday 20 August 2019 01:40:13 +0100 (0:00:00.282) 0:03:06.532 ******** TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Tuesday 20 August 
2019 01:40:14 +0100 (0:00:00.293) 0:03:06.825 ******** TASK [container-engine/docker : ensure docker packages are installed] ********** Tuesday 20 August 2019 01:40:14 +0100 (0:00:00.281) 0:03:07.106 ******** TASK [container-engine/docker : Ensure docker packages are installed] ********** Tuesday 20 August 2019 01:40:14 +0100 (0:00:00.376) 0:03:07.483 ******** TASK [container-engine/docker : get available packages on Ubuntu] ************** Tuesday 20 August 2019 01:40:15 +0100 (0:00:00.371) 0:03:07.854 ******** TASK [container-engine/docker : show available packages on ubuntu] ************* Tuesday 20 August 2019 01:40:15 +0100 (0:00:00.281) 0:03:08.135 ******** TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Tuesday 20 August 2019 01:40:15 +0100 (0:00:00.351) 0:03:08.487 ******** TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Tuesday 20 August 2019 01:40:16 +0100 (0:00:00.305) 0:03:08.793 ******** ok: [kube2] ok: [kube3] ok: [kube1] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Tuesday 20 August 2019 01:40:18 +0100 (0:00:02.225) 0:03:11.018 ******** ok: [kube2] ok: [kube1] ok: [kube3] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Tuesday 20 August 2019 01:40:19 +0100 (0:00:01.099) 0:03:12.118 ******** TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Tuesday 20 August 2019 01:40:19 +0100 (0:00:00.351) 0:03:12.470 ******** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Tuesday 20 August 2019 01:40:20 +0100 (0:00:01.034) 0:03:13.505 ******** TASK [container-engine/docker : get systemd version] *************************** Tuesday 20 August 2019 01:40:21 +0100 (0:00:00.305) 0:03:13.810 ******** TASK [container-engine/docker : Write docker.service systemd file] ************* Tuesday 20 August 2019 01:40:21 +0100 (0:00:00.312) 0:03:14.122 ******** TASK [container-engine/docker : Write docker options systemd drop-in] ********** Tuesday 20 August 2019 01:40:21 +0100 (0:00:00.301) 0:03:14.424 ******** changed: [kube3] changed: [kube1] changed: [kube2] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Tuesday 20 August 2019 01:40:23 +0100 (0:00:02.201) 0:03:16.626 ******** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Tuesday 20 August 2019 01:40:26 +0100 (0:00:02.293) 0:03:18.920 ******** TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Tuesday 20 August 2019 01:40:26 +0100 (0:00:00.314) 0:03:19.234 ******** RUNNING HANDLER [container-engine/docker : restart docker] ********************* Tuesday 20 August 2019 01:40:26 +0100 (0:00:00.284) 0:03:19.519 ******** changed: [kube3] changed: [kube1] changed: [kube2] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Tuesday 20 August 2019 01:40:28 +0100 (0:00:01.995) 0:03:21.514 ******** changed: [kube3] changed: [kube1] changed: [kube2] RUNNING HANDLER 
[container-engine/docker : Docker | reload docker.socket] ****** Tuesday 20 August 2019 01:40:29 +0100 (0:00:01.095) 0:03:22.609 ******** RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Tuesday 20 August 2019 01:40:30 +0100 (0:00:00.310) 0:03:22.920 ******** changed: [kube3] changed: [kube2] changed: [kube1] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Tuesday 20 August 2019 01:40:34 +0100 (0:00:04.102) 0:03:27.023 ******** Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube3] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Tuesday 20 August 2019 01:40:44 +0100 (0:00:10.281) 0:03:37.305 ******** changed: [kube3] changed: [kube2] changed: [kube1] TASK [container-engine/docker : ensure docker service is started and enabled] *** Tuesday 20 August 2019 01:40:45 +0100 (0:00:01.183) 0:03:38.488 ******** ok: [kube1] => (item=docker) ok: [kube3] => (item=docker) ok: [kube2] => (item=docker) TASK [download : include_tasks] ************************************************ Tuesday 20 August 2019 01:40:47 +0100 (0:00:01.280) 0:03:39.768 ******** included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Tuesday 20 August 2019 01:40:47 +0100 (0:00:00.536) 0:03:40.305 ******** ok: [kube1] ok: [kube2] ok: [kube3] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Tuesday 20 August 2019 01:40:48 +0100 (0:00:01.045) 0:03:41.351 ******** changed: [kube1] changed: [kube2] changed: [kube3] TASK [download : container_download | create local directory for saved/loaded container images] *** Tuesday 20 August 2019 01:40:49 +0100 (0:00:01.006) 0:03:42.357 ******** TASK [download : 
Download items] *********************************************** Tuesday 20 August 2019 01:40:49 +0100 (0:00:00.107) 0:03:42.465 ******** fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} PLAY RECAP ********************************************************************* kube1 : ok=108 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0 kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0 kube3 : ok=95 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0 Tuesday 20 August 2019 01:40:52 +0100 (0:00:02.787) 0:03:45.252 ******** =============================================================================== Install packages ------------------------------------------------------- 36.08s Wait for host to be available ------------------------------------------ 21.51s gather facts from all instances ---------------------------------------- 16.59s container-engine/docker : Docker | pause while Docker restarts --------- 10.28s Persist loaded modules -------------------------------------------------- 5.94s kubernetes/preinstall : Create kubernetes directories ------------------- 4.12s container-engine/docker : Docker | reload docker ------------------------ 4.10s download : Download items ----------------------------------------------- 2.79s bootstrap-os : Assign 
inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.60s Load required kernel modules -------------------------------------------- 2.59s bootstrap-os : Gather nodes hostnames ----------------------------------- 2.58s kubernetes/preinstall : Create cni directories -------------------------- 2.49s Extend root VG ---------------------------------------------------------- 2.33s container-engine/docker : Write docker dns systemd drop-in -------------- 2.29s container-engine/docker : ensure service is started if docker packages are already present --- 2.23s container-engine/docker : Write docker options systemd drop-in ---------- 2.20s kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.17s kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.16s Gathering Facts --------------------------------------------------------- 2.11s download : Sync container ----------------------------------------------- 2.08s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
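The repeated fatal above is newer Ansible rejecting `delegate_to` set directly on an include (a TaskInclude object). A minimal sketch of the failing shape and one accepted rewrite that forwards the keyword through `include_tasks: ... apply:`; the included file name and delegate variable are illustrative, since the log only shows the task name:

```yaml
# Failing shape (illustrative; file and variable names are hypothetical):
- name: container_download | Make download decision if pull is required by tag or sha256
  include_tasks: set_container_facts.yml      # hypothetical file
  delegate_to: "{{ download_delegate }}"      # rejected: not valid on a TaskInclude

# Accepted shape: apply task keywords to the included tasks instead:
- name: container_download | Make download decision if pull is required by tag or sha256
  include_tasks:
    file: set_container_facts.yml             # hypothetical file
    apply:
      delegate_to: "{{ download_delegate }}"
```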
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Tue Aug 20 01:10:01 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 20 Aug 2019 01:10:01 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #290 In-Reply-To: <1165338122.3936.1566177836236.JavaMail.jenkins@jenkins.ci.centos.org> References: <1165338122.3936.1566177836236.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <611809129.4038.1566263401559.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 61.08 KB...] FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=7 changed=4 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── dependency ├── cleanup ├── destroy ├── syntax ├── create ├── prepare ├── converge ├── 
idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... [701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', 
u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'cleanup' Skipping, cleanup playbook not configured. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=instance) TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 --> Pruning extra files from scenario ephemeral directory ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── dependency ├── create └── prepare --> Scenario: 'default' --> Action: 'dependency' Skipping, missing the requirements file. --> Scenario: 'default' --> Action: 'create' --> Sanity checks: 'docker' PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Determine which docker image info module to use] ************************* ok: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image (new)] ********************************* ok: [localhost] => (item=molecule_local/centos/systemd) TASK [Build an Ansible compatible image (old)] ********************************* skipping: [localhost] => (item=molecule_local/centos/systemd) TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: 
[localhost] => (item=instance) TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=7 changed=3 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── dependency ├── cleanup ├── destroy ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', 
u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'cleanup' Skipping, cleanup playbook not configured. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=instance) TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 --> Pruning extra files from scenario ephemeral directory Build step 'Execute shell' marked build as failure Performing Post build task... 
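Both lint findings above point at boilerplate left in the role's meta/main.yml: [701] wants a `platforms` list under `galaxy_info`, and the [703] hits flag the unedited author/description/company/license defaults. A sketch of metadata that would satisfy the two rules; every value here is a placeholder, not the project's actual choice:

```yaml
# Illustrative galaxy metadata for roles/firewall_config/meta/main.yml.
# All values are placeholders; only the shape matters to rules 701/703.
galaxy_info:
  author: A. Maintainer                 # [703] replace default "your name"
  description: Firewall configuration for GlusterFS nodes
  company: Example Org                  # optional; the key may be removed instead
  license: GPLv2                        # [703] pick a concrete license
  min_ansible_version: 2.5
  platforms:                            # [701] role info should contain platforms
    - name: EL
      versions:
        - 7
  galaxy_tags: []
dependencies: []
```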
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Wed Aug 21 00:16:39 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 21 Aug 2019 00:16:39 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #462 In-Reply-To: <968527053.4032.1566260181372.JavaMail.jenkins@jenkins.ci.centos.org> References: <968527053.4032.1566260181372.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1505727381.4142.1566346599097.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 39.40 KB...] Transaction test succeeded Running transaction Installing : python36-libs-3.6.8-1.el7.x86_64 1/52 Installing : python36-3.6.8-1.el7.x86_64 2/52 Installing : apr-1.4.8-3.el7_4.1.x86_64 3/52 Installing : mpfr-3.1.1-4.el7.x86_64 4/52 Installing : libmpc-1.0.1-3.el7.x86_64 5/52 Installing : apr-util-1.5.2-6.el7.x86_64 6/52 Installing : python36-six-1.11.0-3.el7.noarch 7/52 Installing : cpp-4.8.5-36.el7_6.2.x86_64 8/52 Installing : python36-idna-2.7-2.el7.noarch 9/52 Installing : python36-pysocks-1.6.8-6.el7.noarch 10/52 Installing : python36-urllib3-1.19.1-5.el7.noarch 11/52 Installing : python36-pyroute2-0.4.13-2.el7.noarch 12/52 Installing : python36-setuptools-39.2.0-3.el7.noarch 13/52 Installing : python36-chardet-2.3.0-6.el7.noarch 14/52 Installing : python36-requests-2.12.5-3.el7.noarch 15/52 Installing : python36-distro-1.2.0-3.el7.noarch 16/52 Installing : python36-markupsafe-0.23-3.el7.x86_64 17/52 Installing : python36-jinja2-2.8.1-2.el7.noarch 18/52 Installing : python36-rpm-4.11.3-4.el7.x86_64 19/52 Installing : elfutils-0.172-2.el7.x86_64 20/52 Installing : unzip-6.0-19.el7.x86_64 21/52 Installing : dwz-0.11-3.el7.x86_64 
22/52 Installing : bzip2-1.0.6-13.el7.x86_64 23/52 Installing : distribution-gpg-keys-1.32-1.el7.noarch 24/52 Installing : mock-core-configs-30.4-1.el7.noarch 25/52 Installing : usermode-1.111-5.el7.x86_64 26/52 Installing : pakchois-0.4-10.el7.x86_64 27/52 Installing : patch-2.7.1-10.el7_5.x86_64 28/52 Installing : libmodman-2.0.1-8.el7.x86_64 29/52 Installing : libproxy-0.4.11-11.el7.x86_64 30/52 Installing : gdb-7.6.1-114.el7.x86_64 31/52 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/52 Installing : perl-srpm-macros-1-8.el7.noarch 33/52 Installing : pigz-2.3.4-1.el7.x86_64 34/52 Installing : golang-src-1.11.5-1.el7.noarch 35/52 Installing : nettle-2.7.1-8.el7.x86_64 36/52 Installing : kernel-headers-3.10.0-957.27.2.el7.x86_64 37/52 Installing : glibc-headers-2.17-260.el7_6.6.x86_64 38/52 Installing : glibc-devel-2.17-260.el7_6.6.x86_64 39/52 Installing : gcc-4.8.5-36.el7_6.2.x86_64 40/52 Installing : zip-3.0-11.el7.x86_64 41/52 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 42/52 Installing : mercurial-2.6.2-8.el7_4.x86_64 43/52 Installing : trousers-0.3.14-2.el7.x86_64 44/52 Installing : gnutls-3.3.29-9.el7_6.x86_64 45/52 Installing : neon-0.30.0-3.el7.x86_64 46/52 Installing : subversion-libs-1.7.14-14.el7.x86_64 47/52 Installing : subversion-1.7.14-14.el7.x86_64 48/52 Installing : golang-1.11.5-1.el7.x86_64 49/52 Installing : golang-bin-1.11.5-1.el7.x86_64 50/52 Installing : rpm-build-4.11.3-35.el7.x86_64 51/52 Installing : mock-1.4.16-1.el7.noarch 52/52 Verifying : trousers-0.3.14-2.el7.x86_64 1/52 Verifying : python36-idna-2.7-2.el7.noarch 2/52 Verifying : rpm-build-4.11.3-35.el7.x86_64 3/52 Verifying : glibc-headers-2.17-260.el7_6.6.x86_64 4/52 Verifying : python36-pysocks-1.6.8-6.el7.noarch 5/52 Verifying : mercurial-2.6.2-8.el7_4.x86_64 6/52 Verifying : zip-3.0-11.el7.x86_64 7/52 Verifying : python36-3.6.8-1.el7.x86_64 8/52 Verifying : subversion-libs-1.7.14-14.el7.x86_64 9/52 Verifying : python36-urllib3-1.19.1-5.el7.noarch 10/52 
Verifying : kernel-headers-3.10.0-957.27.2.el7.x86_64 11/52 Verifying : nettle-2.7.1-8.el7.x86_64 12/52 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 13/52 Verifying : golang-src-1.11.5-1.el7.noarch 14/52 Verifying : python36-pyroute2-0.4.13-2.el7.noarch 15/52 Verifying : pigz-2.3.4-1.el7.x86_64 16/52 Verifying : perl-srpm-macros-1-8.el7.noarch 17/52 Verifying : golang-1.11.5-1.el7.x86_64 18/52 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 19/52 Verifying : golang-bin-1.11.5-1.el7.x86_64 20/52 Verifying : gdb-7.6.1-114.el7.x86_64 21/52 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 22/52 Verifying : gnutls-3.3.29-9.el7_6.x86_64 23/52 Verifying : mock-1.4.16-1.el7.noarch 24/52 Verifying : libmodman-2.0.1-8.el7.x86_64 25/52 Verifying : python36-setuptools-39.2.0-3.el7.noarch 26/52 Verifying : mpfr-3.1.1-4.el7.x86_64 27/52 Verifying : python36-six-1.11.0-3.el7.noarch 28/52 Verifying : apr-util-1.5.2-6.el7.x86_64 29/52 Verifying : python36-chardet-2.3.0-6.el7.noarch 30/52 Verifying : patch-2.7.1-10.el7_5.x86_64 31/52 Verifying : libmpc-1.0.1-3.el7.x86_64 32/52 Verifying : pakchois-0.4-10.el7.x86_64 33/52 Verifying : neon-0.30.0-3.el7.x86_64 34/52 Verifying : usermode-1.111-5.el7.x86_64 35/52 Verifying : apr-1.4.8-3.el7_4.1.x86_64 36/52 Verifying : libproxy-0.4.11-11.el7.x86_64 37/52 Verifying : mock-core-configs-30.4-1.el7.noarch 38/52 Verifying : distribution-gpg-keys-1.32-1.el7.noarch 39/52 Verifying : glibc-devel-2.17-260.el7_6.6.x86_64 40/52 Verifying : bzip2-1.0.6-13.el7.x86_64 41/52 Verifying : subversion-1.7.14-14.el7.x86_64 42/52 Verifying : python36-distro-1.2.0-3.el7.noarch 43/52 Verifying : dwz-0.11-3.el7.x86_64 44/52 Verifying : unzip-6.0-19.el7.x86_64 45/52 Verifying : python36-markupsafe-0.23-3.el7.x86_64 46/52 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 47/52 Verifying : python36-requests-2.12.5-3.el7.noarch 48/52 Verifying : python36-jinja2-2.8.1-2.el7.noarch 49/52 Verifying : python36-libs-3.6.8-1.el7.x86_64 50/52 Verifying : 
elfutils-0.172-2.el7.x86_64 51/52 Verifying : python36-rpm-4.11.3-4.el7.x86_64 52/52 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.16-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.32-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.6 glibc-headers.x86_64 0:2.17-260.el7_6.6 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.27.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.4-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python36.x86_64 0:3.6.8-1.el7 python36-chardet.noarch 0:2.3.0-6.el7 python36-distro.noarch 0:1.2.0-3.el7 python36-idna.noarch 0:2.7-2.el7 python36-jinja2.noarch 0:2.8.1-2.el7 python36-libs.x86_64 0:3.6.8-1.el7 python36-markupsafe.x86_64 0:0.23-3.el7 python36-pyroute2.noarch 0:0.4.13-2.el7 python36-pysocks.noarch 0:1.6.8-6.el7 python36-requests.noarch 0:2.12.5-3.el7 python36-rpm.x86_64 0:4.11.3-4.el7 python36-setuptools.noarch 0:39.2.0-3.el7 python36-six.noarch 0:1.11.0-3.el7 python36-urllib3.noarch 0:1.19.1-5.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1847 0 --:--:-- --:--:-- --:--:-- 1850 100 8513k 100 8513k 0 0 13.0M 0 --:--:-- --:--:-- --:--:-- 13.0M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 1916 0 --:--:-- --:--:-- --:--:-- 1923 4 38.3M 4 1930k 0 0 3477k 0 0:00:11 --:--:-- 0:00:11 3477k100 38.3M 100 38.3M 0 0 32.6M 0 0:00:01 0:00:01 --:--:-- 58.7M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 533 0 --:--:-- --:--:-- --:--:-- 534 0 0 0 620 0 0 1510 0 --:--:-- --:--:-- --:--:-- 1510 100 10.7M 100 10.7M 0 0 14.9M 0 --:--:-- --:--:-- --:--:-- 14.9M ~/nightlyrpm4h1Ns6/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpm4h1Ns6/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpm4h1Ns6/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpm4h1Ns6 ~ INFO: mock.py version 1.4.16 starting (python version = 3.6.8)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpm4h1Ns6/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.16 INFO: Mock Version: 1.4.16 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpm4h1Ns6/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 25 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M d85cb9906b544b70b1cbd999f3242c6f -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.twd7rrb1:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins2923051420486635736.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done 8b17b185 +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 142 | n15.crusty | 172.19.2.15 | crusty | 3928 | Deployed | 8b17b185 | None | None | 7 | x86_64 | 1 | 2140 | None | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Wed Aug 21 00:41:03 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 21 Aug 2019 00:41:03 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #266 In-Reply-To: <1686578635.4033.1566261652937.JavaMail.jenkins@jenkins.ci.centos.org> References: <1686578635.4033.1566261652937.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <400970440.4143.1566348064029.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 289.08 KB...] 
TASK [container-engine/docker : check number of search domains] **************** Wednesday 21 August 2019 01:40:20 +0100 (0:00:00.295) 0:03:10.593 ****** TASK [container-engine/docker : check length of search domains] **************** Wednesday 21 August 2019 01:40:20 +0100 (0:00:00.297) 0:03:10.890 ****** TASK [container-engine/docker : check for minimum kernel version] ************** Wednesday 21 August 2019 01:40:20 +0100 (0:00:00.303) 0:03:11.194 ****** TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Wednesday 21 August 2019 01:40:21 +0100 (0:00:00.297) 0:03:11.492 ****** TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] *** Wednesday 21 August 2019 01:40:21 +0100 (0:00:00.615) 0:03:12.108 ****** TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Wednesday 21 August 2019 01:40:23 +0100 (0:00:01.370) 0:03:13.478 ****** TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Wednesday 21 August 2019 01:40:23 +0100 (0:00:00.262) 0:03:13.741 ****** TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Wednesday 21 August 2019 01:40:23 +0100 (0:00:00.260) 0:03:14.001 ****** TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Wednesday 21 August 2019 01:40:23 +0100 (0:00:00.314) 0:03:14.316 ****** TASK [container-engine/docker : Configure docker repository on Fedora] ********* Wednesday 21 August 2019 01:40:24 +0100 (0:00:00.318) 0:03:14.634 ****** TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Wednesday 21 August 2019 01:40:24 +0100 (0:00:00.279) 0:03:14.914 ****** TASK [container-engine/docker : Copy yum.conf for editing] ********************* Wednesday 21 August 2019 01:40:24 +0100 (0:00:00.300) 0:03:15.214 ****** TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Wednesday 21 August 
2019 01:40:25 +0100 (0:00:00.298) 0:03:15.513 ****** TASK [container-engine/docker : ensure docker packages are installed] ********** Wednesday 21 August 2019 01:40:25 +0100 (0:00:00.294) 0:03:15.807 ****** TASK [container-engine/docker : Ensure docker packages are installed] ********** Wednesday 21 August 2019 01:40:25 +0100 (0:00:00.354) 0:03:16.162 ****** TASK [container-engine/docker : get available packages on Ubuntu] ************** Wednesday 21 August 2019 01:40:26 +0100 (0:00:00.331) 0:03:16.493 ****** TASK [container-engine/docker : show available packages on ubuntu] ************* Wednesday 21 August 2019 01:40:26 +0100 (0:00:00.277) 0:03:16.771 ****** TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Wednesday 21 August 2019 01:40:26 +0100 (0:00:00.286) 0:03:17.058 ****** TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Wednesday 21 August 2019 01:40:26 +0100 (0:00:00.294) 0:03:17.353 ****** ok: [kube1] ok: [kube3] ok: [kube2] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Wednesday 21 August 2019 01:40:28 +0100 (0:00:01.986) 0:03:19.340 ****** ok: [kube1] ok: [kube3] ok: [kube2] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Wednesday 21 August 2019 01:40:30 +0100 (0:00:01.053) 0:03:20.393 ****** TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Wednesday 21 August 2019 01:40:30 +0100 (0:00:00.332) 0:03:20.726 ****** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Wednesday 21 August 2019 01:40:31 +0100 (0:00:01.002) 0:03:21.729 ****** TASK [container-engine/docker : get systemd version] *************************** Wednesday 21 August 2019 01:40:31 +0100 (0:00:00.352) 0:03:22.082 ****** TASK [container-engine/docker : Write docker.service systemd file] ************* Wednesday 21 August 2019 01:40:32 +0100 (0:00:00.312) 0:03:22.394 ****** TASK [container-engine/docker : Write docker options systemd drop-in] ********** Wednesday 21 August 2019 01:40:32 +0100 (0:00:00.316) 0:03:22.710 ****** changed: [kube1] changed: [kube3] changed: [kube2] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Wednesday 21 August 2019 01:40:34 +0100 (0:00:02.540) 0:03:25.251 ****** changed: [kube2] changed: [kube1] changed: [kube3] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Wednesday 21 August 2019 01:40:37 +0100 (0:00:02.259) 0:03:27.511 ****** TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Wednesday 21 August 2019 01:40:37 +0100 (0:00:00.313) 0:03:27.824 ****** RUNNING HANDLER [container-engine/docker : restart docker] ********************* Wednesday 21 August 2019 01:40:37 +0100 (0:00:00.241) 0:03:28.065 ****** changed: [kube1] changed: [kube3] changed: [kube2] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Wednesday 21 August 2019 01:40:39 +0100 (0:00:01.988) 0:03:30.054 ****** changed: [kube3] changed: [kube1] changed: [kube2] RUNNING HANDLER 
[container-engine/docker : Docker | reload docker.socket] ****** Wednesday 21 August 2019 01:40:40 +0100 (0:00:01.175) 0:03:31.230 ****** RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Wednesday 21 August 2019 01:40:41 +0100 (0:00:00.287) 0:03:31.517 ****** changed: [kube1] changed: [kube3] changed: [kube2] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Wednesday 21 August 2019 01:40:45 +0100 (0:00:04.246) 0:03:35.763 ****** Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube1] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Wednesday 21 August 2019 01:40:55 +0100 (0:00:10.228) 0:03:45.992 ****** changed: [kube1] changed: [kube3] changed: [kube2] TASK [container-engine/docker : ensure docker service is started and enabled] *** Wednesday 21 August 2019 01:40:56 +0100 (0:00:01.145) 0:03:47.138 ****** ok: [kube1] => (item=docker) ok: [kube2] => (item=docker) ok: [kube3] => (item=docker) TASK [download : include_tasks] ************************************************ Wednesday 21 August 2019 01:40:58 +0100 (0:00:01.264) 0:03:48.402 ****** included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Wednesday 21 August 2019 01:40:58 +0100 (0:00:00.531) 0:03:48.933 ****** ok: [kube1] ok: [kube2] ok: [kube3] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Wednesday 21 August 2019 01:40:59 +0100 (0:00:01.144) 0:03:50.078 ****** changed: [kube1] changed: [kube2] changed: [kube3] TASK [download : container_download | create local directory for saved/loaded container images] *** Wednesday 21 August 2019 01:41:00 +0100 (0:00:00.973) 0:03:51.052 ****** TASK [download : 
Download items] *********************************************** Wednesday 21 August 2019 01:41:00 +0100 (0:00:00.118) 0:03:51.170 ****** fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} PLAY RECAP ********************************************************************* kube1 : ok=109 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0 kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0 kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0 Wednesday 21 August 2019 01:41:03 +0100 (0:00:02.832) 0:03:54.003 ****** =============================================================================== Install packages ------------------------------------------------------- 35.53s Wait for host to be available ------------------------------------------ 31.99s gather facts from all instances ---------------------------------------- 16.44s container-engine/docker : Docker | pause while Docker restarts --------- 10.23s Persist loaded modules -------------------------------------------------- 6.24s container-engine/docker : Docker | reload docker ------------------------ 4.25s kubernetes/preinstall : Create kubernetes directories ------------------- 4.00s download : Download items ----------------------------------------------- 2.83s Load required kernel modules 
-------------------------------------------- 2.65s container-engine/docker : Write docker options systemd drop-in ---------- 2.54s kubernetes/preinstall : Create cni directories -------------------------- 2.52s bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.41s Extend root VG ---------------------------------------------------------- 2.39s bootstrap-os : Gather nodes hostnames ----------------------------------- 2.33s container-engine/docker : Write docker dns systemd drop-in -------------- 2.26s kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.15s download : Sync container ----------------------------------------------- 2.08s kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.04s download : Download items ----------------------------------------------- 2.01s Gathering Facts --------------------------------------------------------- 1.99s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Wed Aug 21 01:24:10 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 21 Aug 2019 01:24:10 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #291 In-Reply-To: <611809129.4038.1566263401559.JavaMail.jenkins@jenkins.ci.centos.org> References: <611809129.4038.1566263401559.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <387702205.4149.1566350650808.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 61.12 KB...] FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=7 changed=4 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── dependency ├── cleanup ├── destroy ├── syntax ├── create ├── prepare ├── converge ├──
idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... [701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)',
u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'cleanup' Skipping, cleanup playbook not configured. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=instance) TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 --> Pruning extra files from scenario ephemeral directory ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── dependency ├── create └── prepare --> Scenario: 'default' --> Action: 'dependency' Skipping, missing the requirements file. --> Scenario: 'default' --> Action: 'create' --> Sanity checks: 'docker' PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Determine which docker image info module to use] ************************* ok: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image (new)] ********************************* ok: [localhost] => (item=molecule_local/centos/systemd) TASK [Build an Ansible compatible image (old)] ********************************* skipping: [localhost] => (item=molecule_local/centos/systemd) TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: 
[localhost] => (item=instance) TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=7 changed=3 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── dependency ├── cleanup ├── destroy ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', 
u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'cleanup' Skipping, cleanup playbook not configured. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=instance) TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 --> Pruning extra files from scenario ephemeral directory Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Thu Aug 22 00:14:33 2019 From: ci at centos.org (ci at centos.org) Date: Thu, 22 Aug 2019 00:14:33 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #463 In-Reply-To: <1505727381.4142.1566346599097.JavaMail.jenkins@jenkins.ci.centos.org> References: <1505727381.4142.1566346599097.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1108932877.4208.1566432873178.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 39.02 KB...] Transaction test succeeded Running transaction Installing : python36-libs-3.6.8-1.el7.x86_64 1/52 Installing : python36-3.6.8-1.el7.x86_64 2/52 Installing : apr-1.4.8-3.el7_4.1.x86_64 3/52 Installing : mpfr-3.1.1-4.el7.x86_64 4/52 Installing : libmpc-1.0.1-3.el7.x86_64 5/52 Installing : apr-util-1.5.2-6.el7.x86_64 6/52 Installing : python36-six-1.11.0-3.el7.noarch 7/52 Installing : cpp-4.8.5-36.el7_6.2.x86_64 8/52 Installing : python36-idna-2.7-2.el7.noarch 9/52 Installing : python36-pysocks-1.6.8-6.el7.noarch 10/52 Installing : python36-urllib3-1.19.1-5.el7.noarch 11/52 Installing : python36-pyroute2-0.4.13-2.el7.noarch 12/52 Installing : python36-setuptools-39.2.0-3.el7.noarch 13/52 Installing : python36-chardet-2.3.0-6.el7.noarch 14/52 Installing : python36-requests-2.12.5-3.el7.noarch 15/52 Installing : python36-distro-1.2.0-3.el7.noarch 16/52 Installing : python36-markupsafe-0.23-3.el7.x86_64 17/52 Installing : python36-jinja2-2.8.1-2.el7.noarch 18/52 Installing : python36-rpm-4.11.3-4.el7.x86_64 19/52 Installing : elfutils-0.172-2.el7.x86_64 20/52 Installing : unzip-6.0-19.el7.x86_64 21/52 Installing : 
dwz-0.11-3.el7.x86_64 22/52 Installing : bzip2-1.0.6-13.el7.x86_64 23/52 Installing : distribution-gpg-keys-1.32-1.el7.noarch 24/52 Installing : mock-core-configs-30.4-1.el7.noarch 25/52 Installing : usermode-1.111-5.el7.x86_64 26/52 Installing : pakchois-0.4-10.el7.x86_64 27/52 Installing : patch-2.7.1-10.el7_5.x86_64 28/52 Installing : libmodman-2.0.1-8.el7.x86_64 29/52 Installing : libproxy-0.4.11-11.el7.x86_64 30/52 Installing : gdb-7.6.1-114.el7.x86_64 31/52 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/52 Installing : perl-srpm-macros-1-8.el7.noarch 33/52 Installing : pigz-2.3.4-1.el7.x86_64 34/52 Installing : golang-src-1.11.5-1.el7.noarch 35/52 Installing : nettle-2.7.1-8.el7.x86_64 36/52 Installing : kernel-headers-3.10.0-957.27.2.el7.x86_64 37/52 Installing : glibc-headers-2.17-260.el7_6.6.x86_64 38/52 Installing : glibc-devel-2.17-260.el7_6.6.x86_64 39/52 Installing : gcc-4.8.5-36.el7_6.2.x86_64 40/52 Installing : zip-3.0-11.el7.x86_64 41/52 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 42/52 Installing : mercurial-2.6.2-8.el7_4.x86_64 43/52 Installing : trousers-0.3.14-2.el7.x86_64 44/52 Installing : gnutls-3.3.29-9.el7_6.x86_64 45/52 Installing : neon-0.30.0-3.el7.x86_64 46/52 Installing : subversion-libs-1.7.14-14.el7.x86_64 47/52 Installing : subversion-1.7.14-14.el7.x86_64 48/52 Installing : golang-1.11.5-1.el7.x86_64 49/52 Installing : golang-bin-1.11.5-1.el7.x86_64 50/52 Installing : rpm-build-4.11.3-35.el7.x86_64 51/52 Installing : mock-1.4.16-1.el7.noarch 52/52 Verifying : trousers-0.3.14-2.el7.x86_64 1/52 Verifying : python36-idna-2.7-2.el7.noarch 2/52 Verifying : rpm-build-4.11.3-35.el7.x86_64 3/52 Verifying : glibc-headers-2.17-260.el7_6.6.x86_64 4/52 Verifying : python36-pysocks-1.6.8-6.el7.noarch 5/52 Verifying : mercurial-2.6.2-8.el7_4.x86_64 6/52 Verifying : zip-3.0-11.el7.x86_64 7/52 Verifying : python36-3.6.8-1.el7.x86_64 8/52 Verifying : subversion-libs-1.7.14-14.el7.x86_64 9/52 Verifying : 
python36-urllib3-1.19.1-5.el7.noarch 10/52 Verifying : kernel-headers-3.10.0-957.27.2.el7.x86_64 11/52 Verifying : nettle-2.7.1-8.el7.x86_64 12/52 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 13/52 Verifying : golang-src-1.11.5-1.el7.noarch 14/52 Verifying : python36-pyroute2-0.4.13-2.el7.noarch 15/52 Verifying : pigz-2.3.4-1.el7.x86_64 16/52 Verifying : perl-srpm-macros-1-8.el7.noarch 17/52 Verifying : golang-1.11.5-1.el7.x86_64 18/52 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 19/52 Verifying : golang-bin-1.11.5-1.el7.x86_64 20/52 Verifying : gdb-7.6.1-114.el7.x86_64 21/52 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 22/52 Verifying : gnutls-3.3.29-9.el7_6.x86_64 23/52 Verifying : mock-1.4.16-1.el7.noarch 24/52 Verifying : libmodman-2.0.1-8.el7.x86_64 25/52 Verifying : python36-setuptools-39.2.0-3.el7.noarch 26/52 Verifying : mpfr-3.1.1-4.el7.x86_64 27/52 Verifying : python36-six-1.11.0-3.el7.noarch 28/52 Verifying : apr-util-1.5.2-6.el7.x86_64 29/52 Verifying : python36-chardet-2.3.0-6.el7.noarch 30/52 Verifying : patch-2.7.1-10.el7_5.x86_64 31/52 Verifying : libmpc-1.0.1-3.el7.x86_64 32/52 Verifying : pakchois-0.4-10.el7.x86_64 33/52 Verifying : neon-0.30.0-3.el7.x86_64 34/52 Verifying : usermode-1.111-5.el7.x86_64 35/52 Verifying : apr-1.4.8-3.el7_4.1.x86_64 36/52 Verifying : libproxy-0.4.11-11.el7.x86_64 37/52 Verifying : mock-core-configs-30.4-1.el7.noarch 38/52 Verifying : distribution-gpg-keys-1.32-1.el7.noarch 39/52 Verifying : glibc-devel-2.17-260.el7_6.6.x86_64 40/52 Verifying : bzip2-1.0.6-13.el7.x86_64 41/52 Verifying : subversion-1.7.14-14.el7.x86_64 42/52 Verifying : python36-distro-1.2.0-3.el7.noarch 43/52 Verifying : dwz-0.11-3.el7.x86_64 44/52 Verifying : unzip-6.0-19.el7.x86_64 45/52 Verifying : python36-markupsafe-0.23-3.el7.x86_64 46/52 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 47/52 Verifying : python36-requests-2.12.5-3.el7.noarch 48/52 Verifying : python36-jinja2-2.8.1-2.el7.noarch 49/52 Verifying : python36-libs-3.6.8-1.el7.x86_64 
50/52 Verifying : elfutils-0.172-2.el7.x86_64 51/52 Verifying : python36-rpm-4.11.3-4.el7.x86_64 52/52 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.16-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.32-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.6 glibc-headers.x86_64 0:2.17-260.el7_6.6 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.27.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.4-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python36.x86_64 0:3.6.8-1.el7 python36-chardet.noarch 0:2.3.0-6.el7 python36-distro.noarch 0:1.2.0-3.el7 python36-idna.noarch 0:2.7-2.el7 python36-jinja2.noarch 0:2.8.1-2.el7 python36-libs.x86_64 0:3.6.8-1.el7 python36-markupsafe.x86_64 0:0.23-3.el7 python36-pyroute2.noarch 0:0.4.13-2.el7 python36-pysocks.noarch 0:1.6.8-6.el7 python36-requests.noarch 0:2.12.5-3.el7 python36-rpm.x86_64 0:4.11.3-4.el7 python36-setuptools.noarch 0:39.2.0-3.el7 python36-six.noarch 0:1.11.0-3.el7 python36-urllib3.noarch 0:1.19.1-5.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0 [... curl progress meter truncated ...] Installing gometalinter. Version: 2.0.5 [... curl progress meter truncated ...] Installing etcd. Version: v3.3.9 [... curl progress meter truncated ...] ~/nightlyrpmRL3LMP/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmRL3LMP/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmRL3LMP/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmRL3LMP ~ INFO: mock.py version 1.4.16 starting (python version = 3.6.8)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmRL3LMP/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.16 INFO: Mock Version: 1.4.16 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmRL3LMP/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 1 minutes 33 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M b913fddc9eab47f684ac64143f3738ec -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.xs62rjr6:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins8934366748045183263.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done b4e94fb5 +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 211 | n20.dusty | 172.19.2.84 | dusty | 3931 | Deployed | b4e94fb5 | None | None | 7 | x86_64 | 1 | 2190 | None | +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Thu Aug 22 00:42:28 2019 From: ci at centos.org (ci at centos.org) Date: Thu, 22 Aug 2019 00:42:28 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #267 In-Reply-To: <400970440.4143.1566348064029.JavaMail.jenkins@jenkins.ci.centos.org> References: <400970440.4143.1566348064029.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <212310573.4212.1566434548925.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 289.03 KB...] 
TASK [container-engine/docker : check number of search domains] **************** Thursday 22 August 2019 01:41:44 +0100 (0:00:00.301) 0:03:00.254 ******* TASK [container-engine/docker : check length of search domains] **************** Thursday 22 August 2019 01:41:44 +0100 (0:00:00.297) 0:03:00.552 ******* TASK [container-engine/docker : check for minimum kernel version] ************** Thursday 22 August 2019 01:41:45 +0100 (0:00:00.304) 0:03:00.856 ******* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Thursday 22 August 2019 01:41:45 +0100 (0:00:00.287) 0:03:01.144 ******* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] *** Thursday 22 August 2019 01:41:46 +0100 (0:00:00.610) 0:03:01.754 ******* TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Thursday 22 August 2019 01:41:47 +0100 (0:00:01.410) 0:03:03.165 ******* TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Thursday 22 August 2019 01:41:47 +0100 (0:00:00.273) 0:03:03.439 ******* TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Thursday 22 August 2019 01:41:48 +0100 (0:00:00.268) 0:03:03.707 ******* TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Thursday 22 August 2019 01:41:48 +0100 (0:00:00.311) 0:03:04.019 ******* TASK [container-engine/docker : Configure docker repository on Fedora] ********* Thursday 22 August 2019 01:41:48 +0100 (0:00:00.312) 0:03:04.331 ******* TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Thursday 22 August 2019 01:41:48 +0100 (0:00:00.285) 0:03:04.617 ******* TASK [container-engine/docker : Copy yum.conf for editing] ********************* Thursday 22 August 2019 01:41:49 +0100 (0:00:00.286) 0:03:04.904 ******* TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Thursday 22 August 
2019 01:41:49 +0100 (0:00:00.297) 0:03:05.202 ******* TASK [container-engine/docker : ensure docker packages are installed] ********** Thursday 22 August 2019 01:41:49 +0100 (0:00:00.299) 0:03:05.501 ******* TASK [container-engine/docker : Ensure docker packages are installed] ********** Thursday 22 August 2019 01:41:50 +0100 (0:00:00.368) 0:03:05.870 ******* TASK [container-engine/docker : get available packages on Ubuntu] ************** Thursday 22 August 2019 01:41:50 +0100 (0:00:00.354) 0:03:06.225 ******* TASK [container-engine/docker : show available packages on ubuntu] ************* Thursday 22 August 2019 01:41:50 +0100 (0:00:00.300) 0:03:06.525 ******* TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Thursday 22 August 2019 01:41:51 +0100 (0:00:00.275) 0:03:06.801 ******* TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Thursday 22 August 2019 01:41:51 +0100 (0:00:00.295) 0:03:07.096 ******* ok: [kube3] ok: [kube1] ok: [kube2] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Thursday 22 August 2019 01:41:53 +0100 (0:00:02.132) 0:03:09.228 ******* ok: [kube2] ok: [kube1] ok: [kube3] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Thursday 22 August 2019 01:41:54 +0100 (0:00:01.262) 0:03:10.491 ******* TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Thursday 22 August 2019 01:41:55 +0100 (0:00:00.392) 0:03:10.884 ******* changed: [kube2] changed: [kube1] changed: [kube3] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Thursday 22 August 2019 01:41:56 +0100 (0:00:01.068) 0:03:11.952 ******* TASK [container-engine/docker : get systemd version] *************************** Thursday 22 August 2019 01:41:56 +0100 (0:00:00.336) 0:03:12.288 ******* TASK [container-engine/docker : Write docker.service systemd file] ************* Thursday 22 August 2019 01:41:56 +0100 (0:00:00.313) 0:03:12.602 ******* TASK [container-engine/docker : Write docker options systemd drop-in] ********** Thursday 22 August 2019 01:41:57 +0100 (0:00:00.309) 0:03:12.911 ******* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Thursday 22 August 2019 01:41:59 +0100 (0:00:02.313) 0:03:15.225 ******* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Thursday 22 August 2019 01:42:01 +0100 (0:00:02.358) 0:03:17.583 ******* TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Thursday 22 August 2019 01:42:02 +0100 (0:00:00.334) 0:03:17.918 ******* RUNNING HANDLER [container-engine/docker : restart docker] ********************* Thursday 22 August 2019 01:42:02 +0100 (0:00:00.246) 0:03:18.165 ******* changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Thursday 22 August 2019 01:42:04 +0100 (0:00:01.992) 0:03:20.158 ******* changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER 
[container-engine/docker : Docker | reload docker.socket] ****** Thursday 22 August 2019 01:42:05 +0100 (0:00:01.221) 0:03:21.379 ******* RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Thursday 22 August 2019 01:42:05 +0100 (0:00:00.286) 0:03:21.665 ******* changed: [kube2] changed: [kube1] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Thursday 22 August 2019 01:42:10 +0100 (0:00:04.088) 0:03:25.754 ******* Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube1] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Thursday 22 August 2019 01:42:20 +0100 (0:00:10.255) 0:03:36.009 ******* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : ensure docker service is started and enabled] *** Thursday 22 August 2019 01:42:21 +0100 (0:00:01.221) 0:03:37.230 ******* ok: [kube2] => (item=docker) ok: [kube1] => (item=docker) ok: [kube3] => (item=docker) TASK [download : include_tasks] ************************************************ Thursday 22 August 2019 01:42:22 +0100 (0:00:01.342) 0:03:38.573 ******* included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Thursday 22 August 2019 01:42:23 +0100 (0:00:00.641) 0:03:39.214 ******* ok: [kube1] ok: [kube2] ok: [kube3] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Thursday 22 August 2019 01:42:24 +0100 (0:00:01.018) 0:03:40.233 ******* changed: [kube1] changed: [kube2] changed: [kube3] TASK [download : container_download | create local directory for saved/loaded container images] *** Thursday 22 August 2019 01:42:25 +0100 (0:00:01.008) 0:03:41.241 ******* TASK [download : 
Download items] *********************************************** Thursday 22 August 2019 01:42:25 +0100 (0:00:00.127) 0:03:41.369 ******* fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} [... identical failure repeated verbatim for kube2 and kube3, then for all three hosts on each remaining download item; in between, 'included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3' appears three times ...] PLAY RECAP ********************************************************************* kube1 : ok=109 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0 kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0 kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0 Thursday 22 August 2019 01:42:28 +0100 (0:00:02.808) 0:03:44.177 ******* =============================================================================== Install packages ------------------------------------------------------- 35.14s Wait for host to be available ------------------------------------------ 21.53s gather facts from all instances ---------------------------------------- 16.49s container-engine/docker : Docker | pause while Docker restarts --------- 10.26s Persist loaded modules -------------------------------------------------- 6.14s kubernetes/preinstall : Create kubernetes directories ------------------- 4.20s container-engine/docker : Docker | reload docker ------------------------ 4.09s download : Download items ----------------------------------------------- 2.81s bootstrap-os : Assign 
inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.67s kubernetes/preinstall : Create cni directories -------------------------- 2.57s bootstrap-os : Gather nodes hostnames ----------------------------------- 2.55s Load required kernel modules -------------------------------------------- 2.54s container-engine/docker : Write docker dns systemd drop-in -------------- 2.36s Extend root VG ---------------------------------------------------------- 2.32s container-engine/docker : Write docker options systemd drop-in ---------- 2.31s kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.29s Gathering Facts --------------------------------------------------------- 2.22s download : Download items ----------------------------------------------- 2.17s container-engine/docker : ensure service is started if docker packages are already present --- 2.13s kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.06s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
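Every host in this play fails on the same Ansible error: `delegate_to` is rejected as an attribute of a dynamic `include_tasks` (internally a `TaskInclude`), which recent Ansible releases validate strictly. A minimal sketch of the failing pattern and one way to adapt it, assuming the delegation is only needed by the tasks inside the included file; the variable name `download_delegate` is illustrative, not taken from the log:

```yaml
# Fails on recent Ansible: task keywords such as delegate_to cannot be
# attached to a dynamic include itself.
- name: container_download | Make download decision if pull is required by tag or sha256
  include_tasks: download_container.yml
  delegate_to: "{{ download_delegate }}"   # rejected as a TaskInclude attribute

# Works on Ansible >= 2.7: pass the keyword through to the included tasks
# via the include's `apply` argument (download_delegate is illustrative).
- name: container_download | Make download decision if pull is required by tag or sha256
  include_tasks: download_container.yml
  args:
    apply:
      delegate_to: "{{ download_delegate }}"
```

The alternative is to drop the keyword from the include and set `delegate_to` on each task inside `download_container.yml` instead.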
Could not match :Build started : False
Logical operation result is FALSE
Skipping script : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by

SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

END OF POST BUILD TASK : 0

From ci at centos.org  Thu Aug 22 01:24:11 2019
From: ci at centos.org (ci at centos.org)
Date: Thu, 22 Aug 2019 01:24:11 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #292
In-Reply-To: <387702205.4149.1566350650808.JavaMail.jenkins@jenkins.ci.centos.org>
References: <387702205.4149.1566350650808.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <390229520.4217.1566437051109.JavaMail.jenkins@jenkins.ci.centos.org>

See 
------------------------------------------
[...truncated 61.12 KB...]
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

PLAY RECAP *********************************************************************
localhost : ok=7 changed=4 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0

--> Scenario: 'default'
--> Action: 'prepare'

PLAY [Prepare] *****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

PLAY RECAP *********************************************************************
instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── dependency
    ├── cleanup
    ├── destroy
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: author
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: description
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: company
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: license
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
An error occurred during the test sequence action: 'lint'. Cleaning up.
--> Scenario: 'default'
--> Action: 'cleanup'
Skipping, cleanup playbook not configured.
--> Scenario: 'default'
--> Action: 'destroy'

PLAY [Destroy] *****************************************************************

TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=instance)

TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Delete docker network(s)] ************************************************

PLAY RECAP *********************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0

--> Pruning extra files from scenario ephemeral directory
./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── dependency
    ├── create
    └── prepare

--> Scenario: 'default'
--> Action: 'dependency'
Skipping, missing the requirements file.
--> Scenario: 'default'
--> Action: 'create'
--> Sanity checks: 'docker'

PLAY [Create] ******************************************************************

TASK [Log into a Docker registry] **********************************************
skipping: [localhost] => (item=None)

TASK [Create Dockerfiles from image names] *************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Determine which docker image info module to use] *************************
ok: [localhost]

TASK [Discover local Docker images] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Build an Ansible compatible image (new)] *********************************
ok: [localhost] => (item=molecule_local/centos/systemd)

TASK [Build an Ansible compatible image (old)] *********************************
skipping: [localhost] => (item=molecule_local/centos/systemd)

TASK [Create docker network(s)] ************************************************

TASK [Determine the CMD directives] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Create molecule instance(s)] *********************************************
changed: [localhost] => (item=instance)

TASK [Wait for instance(s) creation to complete] *******************************
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

PLAY RECAP *********************************************************************
localhost : ok=7 changed=3 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0

--> Scenario: 'default'
--> Action: 'prepare'

PLAY [Prepare] *****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

PLAY RECAP *********************************************************************
instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── dependency
    ├── cleanup
    ├── destroy
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: author
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: description
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: company
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: license
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
An error occurred during the test sequence action: 'lint'. Cleaning up.
--> Scenario: 'default'
--> Action: 'cleanup'
Skipping, cleanup playbook not configured.
--> Scenario: 'default'
--> Action: 'destroy'

PLAY [Destroy] *****************************************************************

TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=instance)

TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Delete docker network(s)] ************************************************

PLAY RECAP *********************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0

--> Pruning extra files from scenario ephemeral directory
Build step 'Execute shell' marked build as failure
Performing Post build task...
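The ansible-lint failures above ([701] and [703]) all point at the same cause: the role's meta/main.yml still carries the Galaxy skeleton's placeholder metadata and no platforms list. A sketch of a meta/main.yml that would satisfy both rules; the author, company, and platform values here are illustrative placeholders, not taken from the project:

```yaml
galaxy_info:
  author: Gluster maintainers        # illustrative; replace with the real author
  description: Configure firewall ports and services for a Gluster deployment
  company: Red Hat                   # illustrative; optional
  license: GPLv3
  min_ansible_version: 2.5
  platforms:                         # rule [701]: role info should contain platforms
    - name: EL
      versions:
        - 7
  galaxy_tags: []
dependencies: []
```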
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Fri Aug 23 00:16:10 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 23 Aug 2019 00:16:10 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #464 In-Reply-To: <1108932877.4208.1566432873178.JavaMail.jenkins@jenkins.ci.centos.org> References: <1108932877.4208.1566432873178.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1393444144.4311.1566519370763.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 38.69 KB...] Transaction test succeeded Running transaction Installing : python36-libs-3.6.8-1.el7.x86_64 1/52 Installing : python36-3.6.8-1.el7.x86_64 2/52 Installing : apr-1.4.8-3.el7_4.1.x86_64 3/52 Installing : mpfr-3.1.1-4.el7.x86_64 4/52 Installing : libmpc-1.0.1-3.el7.x86_64 5/52 Installing : apr-util-1.5.2-6.el7.x86_64 6/52 Installing : python36-six-1.11.0-3.el7.noarch 7/52 Installing : cpp-4.8.5-36.el7_6.2.x86_64 8/52 Installing : python36-idna-2.7-2.el7.noarch 9/52 Installing : python36-pysocks-1.6.8-6.el7.noarch 10/52 Installing : python36-urllib3-1.19.1-5.el7.noarch 11/52 Installing : python36-pyroute2-0.4.13-2.el7.noarch 12/52 Installing : python36-setuptools-39.2.0-3.el7.noarch 13/52 Installing : python36-chardet-2.3.0-6.el7.noarch 14/52 Installing : python36-requests-2.12.5-3.el7.noarch 15/52 Installing : python36-distro-1.2.0-3.el7.noarch 16/52 Installing : python36-markupsafe-0.23-3.el7.x86_64 17/52 Installing : python36-jinja2-2.8.1-2.el7.noarch 18/52 Installing : python36-rpm-4.11.3-4.el7.x86_64 19/52 Installing : elfutils-0.172-2.el7.x86_64 20/52 Installing : unzip-6.0-19.el7.x86_64 21/52 Installing : 
dwz-0.11-3.el7.x86_64 22/52 Installing : bzip2-1.0.6-13.el7.x86_64 23/52 Installing : distribution-gpg-keys-1.32-1.el7.noarch 24/52 Installing : mock-core-configs-30.4-1.el7.noarch 25/52 Installing : usermode-1.111-5.el7.x86_64 26/52 Installing : pakchois-0.4-10.el7.x86_64 27/52 Installing : patch-2.7.1-10.el7_5.x86_64 28/52 Installing : libmodman-2.0.1-8.el7.x86_64 29/52 Installing : libproxy-0.4.11-11.el7.x86_64 30/52 Installing : gdb-7.6.1-114.el7.x86_64 31/52 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/52 Installing : perl-srpm-macros-1-8.el7.noarch 33/52 Installing : pigz-2.3.4-1.el7.x86_64 34/52 Installing : golang-src-1.11.5-1.el7.noarch 35/52 Installing : nettle-2.7.1-8.el7.x86_64 36/52 Installing : kernel-headers-3.10.0-957.27.2.el7.x86_64 37/52 Installing : glibc-headers-2.17-260.el7_6.6.x86_64 38/52 Installing : glibc-devel-2.17-260.el7_6.6.x86_64 39/52 Installing : gcc-4.8.5-36.el7_6.2.x86_64 40/52 Installing : zip-3.0-11.el7.x86_64 41/52 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 42/52 Installing : mercurial-2.6.2-8.el7_4.x86_64 43/52 Installing : trousers-0.3.14-2.el7.x86_64 44/52 Installing : gnutls-3.3.29-9.el7_6.x86_64 45/52 Installing : neon-0.30.0-3.el7.x86_64 46/52 Installing : subversion-libs-1.7.14-14.el7.x86_64 47/52 Installing : subversion-1.7.14-14.el7.x86_64 48/52 Installing : golang-1.11.5-1.el7.x86_64 49/52 Installing : golang-bin-1.11.5-1.el7.x86_64 50/52 Installing : rpm-build-4.11.3-35.el7.x86_64 51/52 Installing : mock-1.4.16-1.el7.noarch 52/52 Verifying : trousers-0.3.14-2.el7.x86_64 1/52 Verifying : python36-idna-2.7-2.el7.noarch 2/52 Verifying : rpm-build-4.11.3-35.el7.x86_64 3/52 Verifying : glibc-headers-2.17-260.el7_6.6.x86_64 4/52 Verifying : python36-pysocks-1.6.8-6.el7.noarch 5/52 Verifying : mercurial-2.6.2-8.el7_4.x86_64 6/52 Verifying : zip-3.0-11.el7.x86_64 7/52 Verifying : python36-3.6.8-1.el7.x86_64 8/52 Verifying : subversion-libs-1.7.14-14.el7.x86_64 9/52 Verifying : 
python36-urllib3-1.19.1-5.el7.noarch 10/52 Verifying : kernel-headers-3.10.0-957.27.2.el7.x86_64 11/52 Verifying : nettle-2.7.1-8.el7.x86_64 12/52 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 13/52 Verifying : golang-src-1.11.5-1.el7.noarch 14/52 Verifying : python36-pyroute2-0.4.13-2.el7.noarch 15/52 Verifying : pigz-2.3.4-1.el7.x86_64 16/52 Verifying : perl-srpm-macros-1-8.el7.noarch 17/52 Verifying : golang-1.11.5-1.el7.x86_64 18/52 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 19/52 Verifying : golang-bin-1.11.5-1.el7.x86_64 20/52 Verifying : gdb-7.6.1-114.el7.x86_64 21/52 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 22/52 Verifying : gnutls-3.3.29-9.el7_6.x86_64 23/52 Verifying : mock-1.4.16-1.el7.noarch 24/52 Verifying : libmodman-2.0.1-8.el7.x86_64 25/52 Verifying : python36-setuptools-39.2.0-3.el7.noarch 26/52 Verifying : mpfr-3.1.1-4.el7.x86_64 27/52 Verifying : python36-six-1.11.0-3.el7.noarch 28/52 Verifying : apr-util-1.5.2-6.el7.x86_64 29/52 Verifying : python36-chardet-2.3.0-6.el7.noarch 30/52 Verifying : patch-2.7.1-10.el7_5.x86_64 31/52 Verifying : libmpc-1.0.1-3.el7.x86_64 32/52 Verifying : pakchois-0.4-10.el7.x86_64 33/52 Verifying : neon-0.30.0-3.el7.x86_64 34/52 Verifying : usermode-1.111-5.el7.x86_64 35/52 Verifying : apr-1.4.8-3.el7_4.1.x86_64 36/52 Verifying : libproxy-0.4.11-11.el7.x86_64 37/52 Verifying : mock-core-configs-30.4-1.el7.noarch 38/52 Verifying : distribution-gpg-keys-1.32-1.el7.noarch 39/52 Verifying : glibc-devel-2.17-260.el7_6.6.x86_64 40/52 Verifying : bzip2-1.0.6-13.el7.x86_64 41/52 Verifying : subversion-1.7.14-14.el7.x86_64 42/52 Verifying : python36-distro-1.2.0-3.el7.noarch 43/52 Verifying : dwz-0.11-3.el7.x86_64 44/52 Verifying : unzip-6.0-19.el7.x86_64 45/52 Verifying : python36-markupsafe-0.23-3.el7.x86_64 46/52 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 47/52 Verifying : python36-requests-2.12.5-3.el7.noarch 48/52 Verifying : python36-jinja2-2.8.1-2.el7.noarch 49/52 Verifying : python36-libs-3.6.8-1.el7.x86_64 
50/52 Verifying : elfutils-0.172-2.el7.x86_64 51/52 Verifying : python36-rpm-4.11.3-4.el7.x86_64 52/52 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.16-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.32-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.6 glibc-headers.x86_64 0:2.17-260.el7_6.6 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.27.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.4-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python36.x86_64 0:3.6.8-1.el7 python36-chardet.noarch 0:2.3.0-6.el7 python36-distro.noarch 0:1.2.0-3.el7 python36-idna.noarch 0:2.7-2.el7 python36-jinja2.noarch 0:2.8.1-2.el7 python36-libs.x86_64 0:3.6.8-1.el7 python36-markupsafe.x86_64 0:0.23-3.el7 python36-pyroute2.noarch 0:0.4.13-2.el7 python36-pysocks.noarch 0:1.6.8-6.el7 python36-requests.noarch 0:2.12.5-3.el7 python36-rpm.x86_64 0:4.11.3-4.el7 python36-setuptools.noarch 0:39.2.0-3.el7 python36-six.noarch 0:1.11.0-3.el7 python36-urllib3.noarch 0:1.19.1-5.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0
[curl download progress meter elided]
Installing gometalinter.
Version: 2.0.5
[curl download progress meter elided]
Installing etcd.
Version: v3.3.9
[curl download progress meter elided]
~/nightlyrpmkhY5zo/go/src/github.com/gluster/glusterd2 ~
Installing vendored packages
Creating dist archive /root/nightlyrpmkhY5zo/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz
Created dist archive /root/nightlyrpmkhY5zo/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz
~ ~/nightlyrpmkhY5zo ~
INFO: mock.py version 1.4.16 starting (python version = 3.6.8)...
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmkhY5zo/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.16 INFO: Mock Version: 1.4.16 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmkhY5zo/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 24 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 78b3117907ec43378550ba45f6021aa9 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.wwmqjycv:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
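The mock build above fails during the rpmbuild phase inside the epel-7-x86_64 chroot. A hypothetical way to reproduce it locally, using the SRPM path and chroot config from the log; guarded so the sketch is safe to run on a machine without mock installed:

```shell
#!/bin/sh
# Rebuild the same SRPM in the same chroot config the CI job used.
# SRPM path is copied from the log above; on failure, read build.log in
# the results directory that mock reports (here /srv/glusterd2/nightly/...).
SRPM=/root/nightlyrpmkhY5zo/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
if command -v mock >/dev/null 2>&1 && [ -f "$SRPM" ]; then
    mock -r epel-7-x86_64 --rebuild "$SRPM"
else
    echo "mock or SRPM not available on this host; nothing to reproduce"
fi
```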
Match found for :Building remotely : True
Logical operation result is TRUE
Running script : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by

SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins3254714308326162584.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done 7091e603
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname   | ip_address  | chassis | used_count | current_state | comment  | distro | rel  | centos_version | architecture | node_pool | console_port | flavor |
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 185     | n58.crusty | 172.19.2.58 | crusty  | 3936       | Deployed      | 7091e603 | None   | None | 7              | x86_64       | 1         | 2570         | None   |
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0

From ci at centos.org  Fri Aug 23 00:37:10 2019
From: ci at centos.org (ci at centos.org)
Date: Fri, 23 Aug 2019 00:37:10 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #268
In-Reply-To: <212310573.4212.1566434548925.JavaMail.jenkins@jenkins.ci.centos.org>
References: <212310573.4212.1566434548925.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <826788857.4313.1566520630877.JavaMail.jenkins@jenkins.ci.centos.org>

See 
------------------------------------------
[...truncated 288.93 KB...]
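The cico-node-done-from-ansible.sh post-build step shown earlier iterates over `$(cat ${SSID_FILE})`, which word-splits the file and errors noisily when it is missing (as in the `+ SSID_FILE=` / `++ cat` trace). A sketch of a slightly hardened variant, under the assumption that the job only needs to release each node ID listed in the SSID file:

```shell
#!/bin/sh
# Hardened sketch of the post-build cleanup: release every cico node ID
# in the SSID file, tolerating a missing file instead of failing the step.
SSID_FILE="${SSID_FILE:-${WORKSPACE:-.}/cico-ssid}"
if [ -r "$SSID_FILE" ]; then
    while IFS= read -r ssid; do
        [ -n "$ssid" ] || continue
        cico -q node done "$ssid"    # same cico CLI call the job already runs
    done < "$SSID_FILE"
else
    echo "no SSID file at $SSID_FILE; nothing to release"
fi
```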
TASK [container-engine/docker : check number of search domains] **************** Friday 23 August 2019 01:36:44 +0100 (0:00:00.132) 0:02:02.428 ********* TASK [container-engine/docker : check length of search domains] **************** Friday 23 August 2019 01:36:44 +0100 (0:00:00.132) 0:02:02.561 ********* TASK [container-engine/docker : check for minimum kernel version] ************** Friday 23 August 2019 01:36:44 +0100 (0:00:00.133) 0:02:02.695 ********* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Friday 23 August 2019 01:36:44 +0100 (0:00:00.127) 0:02:02.822 ********* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] *** Friday 23 August 2019 01:36:45 +0100 (0:00:00.247) 0:02:03.070 ********* TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Friday 23 August 2019 01:36:45 +0100 (0:00:00.631) 0:02:03.702 ********* TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Friday 23 August 2019 01:36:45 +0100 (0:00:00.120) 0:02:03.822 ********* TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Friday 23 August 2019 01:36:45 +0100 (0:00:00.119) 0:02:03.942 ********* TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Friday 23 August 2019 01:36:46 +0100 (0:00:00.163) 0:02:04.106 ********* TASK [container-engine/docker : Configure docker repository on Fedora] ********* Friday 23 August 2019 01:36:46 +0100 (0:00:00.141) 0:02:04.248 ********* TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Friday 23 August 2019 01:36:46 +0100 (0:00:00.132) 0:02:04.380 ********* TASK [container-engine/docker : Copy yum.conf for editing] ********************* Friday 23 August 2019 01:36:46 +0100 (0:00:00.129) 0:02:04.509 ********* TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Friday 23 August 2019 
01:36:46 +0100 (0:00:00.125) 0:02:04.634 ********* TASK [container-engine/docker : ensure docker packages are installed] ********** Friday 23 August 2019 01:36:46 +0100 (0:00:00.128) 0:02:04.763 ********* TASK [container-engine/docker : Ensure docker packages are installed] ********** Friday 23 August 2019 01:36:46 +0100 (0:00:00.161) 0:02:04.925 ********* TASK [container-engine/docker : get available packages on Ubuntu] ************** Friday 23 August 2019 01:36:47 +0100 (0:00:00.158) 0:02:05.083 ********* TASK [container-engine/docker : show available packages on ubuntu] ************* Friday 23 August 2019 01:36:47 +0100 (0:00:00.135) 0:02:05.219 ********* TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Friday 23 August 2019 01:36:47 +0100 (0:00:00.139) 0:02:05.359 ********* TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Friday 23 August 2019 01:36:47 +0100 (0:00:00.141) 0:02:05.501 ********* ok: [kube2] ok: [kube1] ok: [kube3] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Friday 23 August 2019 01:36:48 +0100 (0:00:00.895) 0:02:06.396 ********* ok: [kube2] ok: [kube1] ok: [kube3] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Friday 23 August 2019 01:36:48 +0100 (0:00:00.527) 0:02:06.924 ********* TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Friday 23 August 2019 01:36:49 +0100 (0:00:00.130) 0:02:07.055 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Friday 23 August 2019 01:36:49 +0100 (0:00:00.454) 0:02:07.509 ********* TASK [container-engine/docker : get systemd version] *************************** Friday 23 August 2019 01:36:49 +0100 (0:00:00.132) 0:02:07.642 ********* TASK [container-engine/docker : Write docker.service systemd file] ************* Friday 23 August 2019 01:36:49 +0100 (0:00:00.144) 0:02:07.787 ********* TASK [container-engine/docker : Write docker options systemd drop-in] ********** Friday 23 August 2019 01:36:49 +0100 (0:00:00.135) 0:02:07.922 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Friday 23 August 2019 01:36:50 +0100 (0:00:01.002) 0:02:08.924 ********* changed: [kube3] changed: [kube1] changed: [kube2] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Friday 23 August 2019 01:36:51 +0100 (0:00:01.005) 0:02:09.930 ********* TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Friday 23 August 2019 01:36:52 +0100 (0:00:00.139) 0:02:10.070 ********* RUNNING HANDLER [container-engine/docker : restart docker] ********************* Friday 23 August 2019 01:36:52 +0100 (0:00:00.109) 0:02:10.179 ********* changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Friday 23 August 2019 01:36:53 +0100 (0:00:00.864) 0:02:11.044 ********* changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER 
[container-engine/docker : Docker | reload docker.socket] ****** Friday 23 August 2019 01:36:53 +0100 (0:00:00.554) 0:02:11.599 ********* RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Friday 23 August 2019 01:36:53 +0100 (0:00:00.128) 0:02:11.728 ********* changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Friday 23 August 2019 01:36:56 +0100 (0:00:03.022) 0:02:14.751 ********* Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube1] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Friday 23 August 2019 01:37:06 +0100 (0:00:10.094) 0:02:24.845 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : ensure docker service is started and enabled] *** Friday 23 August 2019 01:37:07 +0100 (0:00:00.535) 0:02:25.381 ********* ok: [kube1] => (item=docker) ok: [kube2] => (item=docker) ok: [kube3] => (item=docker) TASK [download : include_tasks] ************************************************ Friday 23 August 2019 01:37:08 +0100 (0:00:00.604) 0:02:25.985 ********* included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Friday 23 August 2019 01:37:08 +0100 (0:00:00.218) 0:02:26.204 ********* ok: [kube1] ok: [kube2] ok: [kube3] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Friday 23 August 2019 01:37:08 +0100 (0:00:00.513) 0:02:26.717 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [download : container_download | create local directory for saved/loaded container images] *** Friday 23 August 2019 01:37:09 +0100 (0:00:00.435) 0:02:27.153 ********* TASK [download : 
Download items] *********************************************** Friday 23 August 2019 01:37:09 +0100 (0:00:00.059) 0:02:27.213 ********* fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} [...the identical 'delegate_to' failure repeats for kube2 and kube3, and again for all three hosts on each subsequent download task; duplicate messages trimmed...] included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube3, kube2 included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube3, kube2 [...further identical failures trimmed...] included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube3, kube2 fatal: [kube3]: FAILED!
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} PLAY RECAP ********************************************************************* kube1 : ok=109 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0 kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0 kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0 Friday 23 August 2019 01:37:10 +0100 (0:00:01.410) 0:02:28.623 ********* =============================================================================== Install packages ------------------------------------------------------- 27.12s Wait for host to be available ------------------------------------------ 16.24s Extend root VG --------------------------------------------------------- 15.79s gather facts from all instances ---------------------------------------- 10.85s container-engine/docker : Docker | pause while Docker restarts --------- 10.09s Persist loaded modules -------------------------------------------------- 3.31s container-engine/docker : Docker | reload docker ------------------------ 3.02s kubernetes/preinstall : Create kubernetes directories ------------------- 1.90s Load required kernel modules 
-------------------------------------------- 1.65s bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 1.53s Extend the root LV and FS to occupy remaining space --------------------- 1.49s bootstrap-os : Gather nodes hostnames ----------------------------------- 1.45s download : Download items ----------------------------------------------- 1.41s download : Download items ----------------------------------------------- 1.26s kubernetes/preinstall : Create cni directories -------------------------- 1.24s bootstrap-os : Disable fastestmirror plugin ----------------------------- 1.22s Gathering Facts --------------------------------------------------------- 1.19s kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 1.09s download : Sync container ----------------------------------------------- 1.07s bootstrap-os : check if atomic host ------------------------------------- 1.07s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
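The repeated fatal above is Ansible rejecting `delegate_to` attached directly to a dynamic `include_tasks`: a TaskInclude accepts only a subset of task keywords, which is exactly what the error message reports. A minimal sketch of the failing shape and one possible workaround; the file name matches the log, but the task names and the `download_delegate` variable are illustrative, not the actual kubespray source:

```yaml
# Failing shape (sketch): delegate_to placed directly on a dynamic include.
- name: container_download | Download containers if pull is required
  include_tasks: download_container.yml
  delegate_to: "{{ download_delegate }}"  # illustrative variable; rejected as
                                          # "not a valid attribute for a TaskInclude"

# One possible fix (sketch): forward the keyword to the included tasks
# through the 'apply' option, which include_tasks does accept.
- name: container_download | Download containers if pull is required
  include_tasks:
    file: download_container.yml
    apply:
      delegate_to: "{{ download_delegate }}"
```

Alternatively, `delegate_to` can be set on the individual tasks inside download_container.yml instead of on the include itself.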
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Fri Aug 23 01:23:59 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 23 Aug 2019 01:23:59 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #293 In-Reply-To: <390229520.4217.1566437051109.JavaMail.jenkins@jenkins.ci.centos.org> References: <390229520.4217.1566437051109.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <532940872.4319.1566523439337.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 61.45 KB...] FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=7 changed=4 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── dependency ├── cleanup ├── destroy ├── syntax ├── create ├── prepare ├── converge ├──
idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... [701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 [...the identical metadata dump printed for each [703] entry trimmed...] An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'cleanup' Skipping, cleanup playbook not configured. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=instance) TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 --> Pruning extra files from scenario ephemeral directory ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── dependency ├── create └── prepare --> Scenario: 'default' --> Action: 'dependency' Skipping, missing the requirements file. --> Scenario: 'default' --> Action: 'create' --> Sanity checks: 'docker' PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Determine which docker image info module to use] ************************* ok: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image (new)] ********************************* ok: [localhost] => (item=molecule_local/centos/systemd) TASK [Build an Ansible compatible image (old)] ********************************* skipping: [localhost] => (item=molecule_local/centos/systemd) TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=instance) TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=7 changed=3 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── dependency ├── cleanup ├── destroy ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 [...the identical metadata dump printed for each [703] entry trimmed...] An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'cleanup' Skipping, cleanup playbook not configured. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=instance) TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 --> Pruning extra files from scenario ephemeral directory Build step 'Execute shell' marked build as failure Performing Post build task...
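The ansible-lint findings above ([701] and [703]) share one cause: roles/firewall_config/meta/main.yml still carries the unedited `ansible-galaxy init` boilerplate ("your name", "your description", and no platforms list). A sketch of a filled-in galaxy_info that would satisfy both rules; every value below is a placeholder to illustrate the shape, not the project's actual metadata:

```yaml
# meta/main.yml (sketch; all values are placeholders)
galaxy_info:
  author: Gluster maintainers                          # [703] wants a real author
  description: Configure firewalld for Gluster nodes   # [703] wants a real description
  company: Example Org                                 # [703] real company, or drop the key
  license: GPLv3                                       # [703] the role's actual license
  min_ansible_version: 2.5
  platforms:                    # [701] role info should contain platforms
    - name: EL
      versions:
        - "7"
  galaxy_tags:
    - gluster
    - firewall
dependencies: []
```

With the defaults replaced and a platforms entry present, the molecule 'lint' action would proceed past these two rules.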
Could not match :Build started  : False
Logical operation result is FALSE
Skipping script  : 
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0

From ci at centos.org  Sat Aug 24 00:16:42 2019
From: ci at centos.org (ci at centos.org)
Date: Sat, 24 Aug 2019 00:16:42 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #465
In-Reply-To: <1393444144.4311.1566519370763.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1393444144.4311.1566519370763.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <2119342915.4381.1566605802856.JavaMail.jenkins@jenkins.ci.centos.org>

See 
------------------------------------------
[...truncated 39.05 KB...]
Transaction test succeeded
Running transaction
  Installing : python36-libs-3.6.8-1.el7.x86_64                          1/52 
  Installing : python36-3.6.8-1.el7.x86_64                               2/52 
  Installing : apr-1.4.8-3.el7_4.1.x86_64                                3/52 
  Installing : mpfr-3.1.1-4.el7.x86_64                                   4/52 
  Installing : libmpc-1.0.1-3.el7.x86_64                                 5/52 
  Installing : apr-util-1.5.2-6.el7.x86_64                               6/52 
  Installing : python36-six-1.11.0-3.el7.noarch                          7/52 
  Installing : cpp-4.8.5-36.el7_6.2.x86_64                               8/52 
  Installing : python36-idna-2.7-2.el7.noarch                            9/52 
  Installing : python36-pysocks-1.6.8-6.el7.noarch                      10/52 
  Installing : python36-urllib3-1.19.1-5.el7.noarch                     11/52 
  Installing : python36-pyroute2-0.4.13-2.el7.noarch                    12/52 
  Installing : python36-setuptools-39.2.0-3.el7.noarch                  13/52 
  Installing : python36-chardet-2.3.0-6.el7.noarch                      14/52 
  Installing : python36-requests-2.12.5-3.el7.noarch                    15/52 
  Installing : python36-distro-1.2.0-3.el7.noarch                       16/52 
  Installing : python36-markupsafe-0.23-3.el7.x86_64                    17/52 
  Installing : python36-jinja2-2.8.1-2.el7.noarch                       18/52 
  Installing : python36-rpm-4.11.3-4.el7.x86_64                         19/52 
  Installing : elfutils-0.172-2.el7.x86_64                              20/52 
  Installing : unzip-6.0-19.el7.x86_64                                  21/52 
  Installing : dwz-0.11-3.el7.x86_64                                    22/52 
  Installing : bzip2-1.0.6-13.el7.x86_64                                23/52 
  Installing : distribution-gpg-keys-1.32-1.el7.noarch                  24/52 
  Installing : mock-core-configs-30.4-1.el7.noarch                      25/52 
  Installing : usermode-1.111-5.el7.x86_64                              26/52 
  Installing : pakchois-0.4-10.el7.x86_64                               27/52 
  Installing : patch-2.7.1-10.el7_5.x86_64                              28/52 
  Installing : libmodman-2.0.1-8.el7.x86_64                             29/52 
  Installing : libproxy-0.4.11-11.el7.x86_64                            30/52 
  Installing : gdb-7.6.1-114.el7.x86_64                                 31/52 
  Installing : perl-Thread-Queue-3.02-2.el7.noarch                      32/52 
  Installing : perl-srpm-macros-1-8.el7.noarch                          33/52 
  Installing : pigz-2.3.4-1.el7.x86_64                                  34/52 
  Installing : golang-src-1.11.5-1.el7.noarch                           35/52 
  Installing : nettle-2.7.1-8.el7.x86_64                                36/52 
  Installing : kernel-headers-3.10.0-957.27.2.el7.x86_64                37/52 
  Installing : glibc-headers-2.17-260.el7_6.6.x86_64                    38/52 
  Installing : glibc-devel-2.17-260.el7_6.6.x86_64                      39/52 
  Installing : gcc-4.8.5-36.el7_6.2.x86_64                              40/52 
  Installing : zip-3.0-11.el7.x86_64                                    41/52 
  Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch             42/52 
  Installing : mercurial-2.6.2-8.el7_4.x86_64                           43/52 
  Installing : trousers-0.3.14-2.el7.x86_64                             44/52 
  Installing : gnutls-3.3.29-9.el7_6.x86_64                             45/52 
  Installing : neon-0.30.0-3.el7.x86_64                                 46/52 
  Installing : subversion-libs-1.7.14-14.el7.x86_64                     47/52 
  Installing : subversion-1.7.14-14.el7.x86_64                          48/52 
  Installing : golang-1.11.5-1.el7.x86_64                               49/52 
  Installing : golang-bin-1.11.5-1.el7.x86_64                           50/52 
  Installing : rpm-build-4.11.3-35.el7.x86_64                           51/52 
  Installing : mock-1.4.16-1.el7.noarch                                 52/52 
  Verifying  : trousers-0.3.14-2.el7.x86_64                              1/52 
  Verifying  : python36-idna-2.7-2.el7.noarch                            2/52 
  Verifying  : rpm-build-4.11.3-35.el7.x86_64                            3/52 
  Verifying  : glibc-headers-2.17-260.el7_6.6.x86_64                     4/52 
  Verifying  : python36-pysocks-1.6.8-6.el7.noarch                       5/52 
  Verifying  : mercurial-2.6.2-8.el7_4.x86_64                            6/52 
  Verifying  : zip-3.0-11.el7.x86_64                                     7/52 
  Verifying  : python36-3.6.8-1.el7.x86_64                               8/52 
  Verifying  : subversion-libs-1.7.14-14.el7.x86_64                      9/52 
  Verifying  : python36-urllib3-1.19.1-5.el7.noarch                     10/52 
  Verifying  : kernel-headers-3.10.0-957.27.2.el7.x86_64                11/52 
  Verifying  : nettle-2.7.1-8.el7.x86_64                                12/52 
  Verifying  : gcc-4.8.5-36.el7_6.2.x86_64                              13/52 
  Verifying  : golang-src-1.11.5-1.el7.noarch                           14/52 
  Verifying  : python36-pyroute2-0.4.13-2.el7.noarch                    15/52 
  Verifying  : pigz-2.3.4-1.el7.x86_64                                  16/52 
  Verifying  : perl-srpm-macros-1-8.el7.noarch                          17/52 
  Verifying  : golang-1.11.5-1.el7.x86_64                               18/52 
  Verifying  : perl-Thread-Queue-3.02-2.el7.noarch                      19/52 
  Verifying  : golang-bin-1.11.5-1.el7.x86_64                           20/52 
  Verifying  : gdb-7.6.1-114.el7.x86_64                                 21/52 
  Verifying  : redhat-rpm-config-9.1.0-87.el7.centos.noarch             22/52 
  Verifying  : gnutls-3.3.29-9.el7_6.x86_64                             23/52 
  Verifying  : mock-1.4.16-1.el7.noarch                                 24/52 
  Verifying  : libmodman-2.0.1-8.el7.x86_64                             25/52 
  Verifying  : python36-setuptools-39.2.0-3.el7.noarch                  26/52 
  Verifying  : mpfr-3.1.1-4.el7.x86_64                                  27/52 
  Verifying  : python36-six-1.11.0-3.el7.noarch                         28/52 
  Verifying  : apr-util-1.5.2-6.el7.x86_64                              29/52 
  Verifying  : python36-chardet-2.3.0-6.el7.noarch                      30/52 
  Verifying  : patch-2.7.1-10.el7_5.x86_64                              31/52 
  Verifying  : libmpc-1.0.1-3.el7.x86_64                                32/52 
  Verifying  : pakchois-0.4-10.el7.x86_64                               33/52 
  Verifying  : neon-0.30.0-3.el7.x86_64                                 34/52 
  Verifying  : usermode-1.111-5.el7.x86_64                              35/52 
  Verifying  : apr-1.4.8-3.el7_4.1.x86_64                               36/52 
  Verifying  : libproxy-0.4.11-11.el7.x86_64                            37/52 
  Verifying  : mock-core-configs-30.4-1.el7.noarch                      38/52 
  Verifying  : distribution-gpg-keys-1.32-1.el7.noarch                  39/52 
  Verifying  : glibc-devel-2.17-260.el7_6.6.x86_64                      40/52 
  Verifying  : bzip2-1.0.6-13.el7.x86_64                                41/52 
  Verifying  : subversion-1.7.14-14.el7.x86_64                          42/52 
  Verifying  : python36-distro-1.2.0-3.el7.noarch                       43/52 
  Verifying  : dwz-0.11-3.el7.x86_64                                    44/52 
  Verifying  : unzip-6.0-19.el7.x86_64                                  45/52 
  Verifying  : python36-markupsafe-0.23-3.el7.x86_64                    46/52 
  Verifying  : cpp-4.8.5-36.el7_6.2.x86_64                              47/52 
  Verifying  : python36-requests-2.12.5-3.el7.noarch                    48/52 
  Verifying  : python36-jinja2-2.8.1-2.el7.noarch                       49/52 
  Verifying  : python36-libs-3.6.8-1.el7.x86_64                         50/52 
  Verifying  : elfutils-0.172-2.el7.x86_64                              51/52 
  Verifying  : python36-rpm-4.11.3-4.el7.x86_64                         52/52 

Installed:
  golang.x86_64 0:1.11.5-1.el7
  mock.noarch 0:1.4.16-1.el7
  rpm-build.x86_64 0:4.11.3-35.el7

Dependency Installed:
  apr.x86_64 0:1.4.8-3.el7_4.1
  apr-util.x86_64 0:1.5.2-6.el7
  bzip2.x86_64 0:1.0.6-13.el7
  cpp.x86_64 0:4.8.5-36.el7_6.2
  distribution-gpg-keys.noarch 0:1.32-1.el7
  dwz.x86_64 0:0.11-3.el7
  elfutils.x86_64 0:0.172-2.el7
  gcc.x86_64 0:4.8.5-36.el7_6.2
  gdb.x86_64 0:7.6.1-114.el7
  glibc-devel.x86_64 0:2.17-260.el7_6.6
  glibc-headers.x86_64 0:2.17-260.el7_6.6
  gnutls.x86_64 0:3.3.29-9.el7_6
  golang-bin.x86_64 0:1.11.5-1.el7
  golang-src.noarch 0:1.11.5-1.el7
  kernel-headers.x86_64 0:3.10.0-957.27.2.el7
  libmodman.x86_64 0:2.0.1-8.el7
  libmpc.x86_64 0:1.0.1-3.el7
  libproxy.x86_64 0:0.4.11-11.el7
  mercurial.x86_64 0:2.6.2-8.el7_4
  mock-core-configs.noarch 0:30.4-1.el7
  mpfr.x86_64 0:3.1.1-4.el7
  neon.x86_64 0:0.30.0-3.el7
  nettle.x86_64 0:2.7.1-8.el7
  pakchois.x86_64 0:0.4-10.el7
  patch.x86_64 0:2.7.1-10.el7_5
  perl-Thread-Queue.noarch 0:3.02-2.el7
  perl-srpm-macros.noarch 0:1-8.el7
  pigz.x86_64 0:2.3.4-1.el7
  python36.x86_64 0:3.6.8-1.el7
  python36-chardet.noarch 0:2.3.0-6.el7
  python36-distro.noarch 0:1.2.0-3.el7
  python36-idna.noarch 0:2.7-2.el7
  python36-jinja2.noarch 0:2.8.1-2.el7
  python36-libs.x86_64 0:3.6.8-1.el7
  python36-markupsafe.x86_64 0:0.23-3.el7
  python36-pyroute2.noarch 0:0.4.13-2.el7
  python36-pysocks.noarch 0:1.6.8-6.el7
  python36-requests.noarch 0:2.12.5-3.el7
  python36-rpm.x86_64 0:4.11.3-4.el7
  python36-setuptools.noarch 0:39.2.0-3.el7
  python36-six.noarch 0:1.11.0-3.el7
  python36-urllib3.noarch 0:1.19.1-5.el7
  redhat-rpm-config.noarch 0:9.1.0-87.el7.centos
  subversion.x86_64 0:1.7.14-14.el7
  subversion-libs.x86_64 0:1.7.14-14.el7
  trousers.x86_64 0:0.3.14-2.el7
  unzip.x86_64 0:6.0-19.el7
  usermode.x86_64 0:1.111-5.el7
  zip.x86_64 0:3.0-11.el7

Complete!
LINUX
Installing dep.
Version: v0.5.0
[...curl download progress meter truncated...]
Installing gometalinter.
Version: 2.0.5
[...curl download progress meter truncated...]
Installing etcd.
Version: v3.3.9
[...curl download progress meter truncated...]
~/nightlyrpmDzhMC2/go/src/github.com/gluster/glusterd2 ~
Installing vendored packages
Creating dist archive /root/nightlyrpmDzhMC2/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz
Created dist archive /root/nightlyrpmDzhMC2/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz
~ ~/nightlyrpmDzhMC2 ~
INFO: mock.py version 1.4.16 starting (python version = 3.6.8)...
Start: init plugins
INFO: selinux disabled
Finish: init plugins
Start: run
INFO: Start(/root/nightlyrpmDzhMC2/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm)  Config(epel-7-x86_64)
Start: clean chroot
Finish: clean chroot
Start: chroot init
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled yum cache
Start: cleaning yum metadata
Finish: cleaning yum metadata
INFO: enabled HW Info plugin
Mock Version: 1.4.16
INFO: Mock Version: 1.4.16
Start: yum install
Finish: yum install
Start: creating root cache
Finish: creating root cache
Finish: chroot init
INFO: Installed packages:
Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: Outputting list of installed packages
Finish: Outputting list of installed packages
ERROR: Exception(/root/nightlyrpmDzhMC2/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 23 seconds
INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64
INFO: Cleaning up build root ('cleanup_on_failure=True')
Start: clean chroot
Finish: clean chroot
ERROR: Command failed: 
 # /usr/bin/systemd-nspawn -q -M 8a94b27dcb0e4c969b296d234623e9ab -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.ee__5jgt:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$  --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for :Building remotely : True
Logical operation result is TRUE
Running script  : 
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins140315004517406884.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done 10bc009e
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname   | ip_address  | chassis | used_count | current_state | comment  | distro | rel  | centos_version | architecture | node_pool | console_port | flavor |
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 180     | n53.crusty | 172.19.2.53 | crusty  | 3940       | Deployed      | 10bc009e | None   | None | 7              | x86_64       | 1         | 2520         | None   |
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0

From ci at centos.org  Sat Aug 24 00:40:57 2019
From: ci at centos.org (ci at centos.org)
Date: Sat, 24 Aug 2019 00:40:57 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #269
In-Reply-To: <826788857.4313.1566520630877.JavaMail.jenkins@jenkins.ci.centos.org>
References: <826788857.4313.1566520630877.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1208299325.4382.1566607257608.JavaMail.jenkins@jenkins.ci.centos.org>

See 
------------------------------------------
[...truncated 289.08 KB...]
TASK [container-engine/docker : check number of search domains] ****************
Saturday 24 August 2019  01:40:13 +0100 (0:00:00.304)       0:03:09.083 ******* 

TASK [container-engine/docker : check length of search domains] ****************
Saturday 24 August 2019  01:40:13 +0100 (0:00:00.304)       0:03:09.387 ******* 

TASK [container-engine/docker : check for minimum kernel version] **************
Saturday 24 August 2019  01:40:13 +0100 (0:00:00.304)       0:03:09.692 ******* 

TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] ***
Saturday 24 August 2019  01:40:14 +0100 (0:00:00.304)       0:03:09.996 ******* 

TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] ***
Saturday 24 August 2019  01:40:14 +0100 (0:00:00.661)       0:03:10.657 ******* 

TASK [container-engine/docker : ensure docker-ce repository public key is installed] ***
Saturday 24 August 2019  01:40:16 +0100 (0:00:01.399)       0:03:12.056 ******* 

TASK [container-engine/docker : ensure docker-ce repository is enabled] ********
Saturday 24 August 2019  01:40:16 +0100 (0:00:00.289)       0:03:12.346 ******* 

TASK [container-engine/docker : ensure docker-engine repository public key is installed] ***
Saturday 24 August 2019  01:40:16 +0100 (0:00:00.268)       0:03:12.614 ******* 

TASK [container-engine/docker : ensure docker-engine repository is enabled] ****
Saturday 24 August 2019  01:40:17 +0100 (0:00:00.330)       0:03:12.945 ******* 

TASK [container-engine/docker : Configure docker repository on Fedora] *********
Saturday 24 August 2019  01:40:17 +0100 (0:00:00.355)       0:03:13.301 ******* 

TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] ***
Saturday 24 August 2019  01:40:17 +0100 (0:00:00.324)       0:03:13.625 ******* 

TASK [container-engine/docker : Copy yum.conf for editing] *********************
Saturday 24 August 2019  01:40:18 +0100 (0:00:00.387)       0:03:14.013 ******* 

TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ******
Saturday 24 August 2019  01:40:18 +0100 (0:00:00.349)       0:03:14.363 ******* 

TASK [container-engine/docker : ensure docker packages are installed] **********
Saturday 24 August 2019  01:40:18 +0100 (0:00:00.307)       0:03:14.671 ******* 

TASK [container-engine/docker : Ensure docker packages are installed] **********
Saturday 24 August 2019  01:40:19 +0100 (0:00:00.415)       0:03:15.086 ******* 

TASK [container-engine/docker : get available packages on Ubuntu] **************
Saturday 24 August 2019  01:40:19 +0100 (0:00:00.391)       0:03:15.477 ******* 

TASK [container-engine/docker : show available packages on ubuntu] *************
Saturday 24 August 2019  01:40:19 +0100 (0:00:00.323)       0:03:15.800 ******* 

TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] ***
Saturday 24 August 2019  01:40:20 +0100 (0:00:00.287)       0:03:16.088 ******* 

TASK [container-engine/docker : ensure service is started if docker packages are already present] ***
Saturday 24 August 2019  01:40:20 +0100 (0:00:00.293)       0:03:16.382 ******* 
ok: [kube3]
ok: [kube2]
ok: [kube1]
 [WARNING]: flush_handlers task does not support when conditional

TASK [container-engine/docker : set fact for docker_version] *******************
Saturday 24 August 2019  01:40:22 +0100 (0:00:02.009)       0:03:18.391 ******* 
ok: [kube1]
ok: [kube2]
ok: [kube3]

TASK [container-engine/docker : check minimum docker version for docker_dns mode. You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] ***
Saturday 24 August 2019  01:40:23 +0100 (0:00:01.082)       0:03:19.474 ******* 

TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] ***
Saturday 24 August 2019  01:40:23 +0100 (0:00:00.329)       0:03:19.803 ******* 
changed: [kube1]
changed: [kube2]
changed: [kube3]

TASK [container-engine/docker : Write docker proxy drop-in] ********************
Saturday 24 August 2019  01:40:24 +0100 (0:00:01.037)       0:03:20.841 ******* 

TASK [container-engine/docker : get systemd version] ***************************
Saturday 24 August 2019  01:40:25 +0100 (0:00:00.329)       0:03:21.170 ******* 

TASK [container-engine/docker : Write docker.service systemd file] *************
Saturday 24 August 2019  01:40:25 +0100 (0:00:00.309)       0:03:21.480 ******* 

TASK [container-engine/docker : Write docker options systemd drop-in] **********
Saturday 24 August 2019  01:40:25 +0100 (0:00:00.315)       0:03:21.796 ******* 
changed: [kube1]
changed: [kube2]
changed: [kube3]

TASK [container-engine/docker : Write docker dns systemd drop-in] **************
Saturday 24 August 2019  01:40:28 +0100 (0:00:02.304)       0:03:24.101 ******* 
changed: [kube1]
changed: [kube2]
changed: [kube3]

TASK [container-engine/docker : Copy docker orphan clean up script to the node] ***
Saturday 24 August 2019  01:40:30 +0100 (0:00:02.188)       0:03:26.289 ******* 

TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] ***
Saturday 24 August 2019  01:40:30 +0100 (0:00:00.326)       0:03:26.616 ******* 

RUNNING HANDLER [container-engine/docker : restart docker] *********************
Saturday 24 August 2019  01:40:30 +0100 (0:00:00.241)       0:03:26.857 ******* 
changed: [kube1]
changed: [kube2]
changed: [kube3]

RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************
Saturday 24 August 2019  01:40:32 +0100 (0:00:02.022)       0:03:28.880 ******* 
changed: [kube1]
changed: [kube2]
changed: [kube3]

RUNNING HANDLER [container-engine/docker : Docker | reload docker.socket] ******
Saturday 24 August 2019  01:40:34 +0100 (0:00:01.155)       0:03:30.035 ******* 

RUNNING HANDLER [container-engine/docker : Docker | reload docker] *************
Saturday 24 August 2019  01:40:34 +0100 (0:00:00.314)       0:03:30.350 ******* 
changed: [kube1]
changed: [kube3]
changed: [kube2]

RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] ***
Saturday 24 August 2019  01:40:38 +0100 (0:00:04.134)       0:03:34.484 ******* 
Pausing for 10 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
[container-engine/docker : Docker | pause while Docker restarts]
Waiting for docker restart:
ok: [kube1]

RUNNING HANDLER [container-engine/docker : Docker | wait for docker] ***********
Saturday 24 August 2019  01:40:48 +0100 (0:00:10.216)       0:03:44.702 ******* 
changed: [kube1]
changed: [kube2]
changed: [kube3]

TASK [container-engine/docker : ensure docker service is started and enabled] ***
Saturday 24 August 2019  01:40:50 +0100 (0:00:01.306)       0:03:46.008 ******* 
ok: [kube1] => (item=docker)
ok: [kube2] => (item=docker)
ok: [kube3] => (item=docker)

TASK [download : include_tasks] ************************************************
Saturday 24 August 2019  01:40:51 +0100 (0:00:01.448)       0:03:47.456 ******* 
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3

TASK [download : Register docker images info] **********************************
Saturday 24 August 2019  01:40:52 +0100 (0:00:00.533)       0:03:47.990 ******* 
ok: [kube1]
ok: [kube2]
ok: [kube3]

TASK [download : container_download | Create dest directory for saved/loaded container images] ***
Saturday 24 August 2019  01:40:53 +0100 (0:00:01.003)       0:03:48.993 ******* 
changed: [kube1]
changed: [kube2]
changed: [kube3]

TASK [download : container_download | create local directory for saved/loaded container images] ***
Saturday 24 August 2019  01:40:54 +0100 (0:00:01.138)       0:03:50.131 ******* 

TASK [download : Download items] ***********************************************
Saturday 24 August 2019  01:40:54 +0100 (0:00:00.146)       0:03:50.278 ******* 
fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n  ^ here\n"}
fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n  ^ here\n"}
fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n  ^ here\n"}
[...the same TaskInclude failure repeated verbatim for kube1, kube2 and kube3...]
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
[...the same TaskInclude failure repeated verbatim for kube1, kube2 and kube3...]
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
[...the same TaskInclude failure repeated verbatim for kube1, kube2 and kube3...]

PLAY RECAP *********************************************************************
kube1                      : ok=109  changed=22   unreachable=0    failed=10   skipped=116  rescued=0    ignored=0   
kube2                      : ok=96   changed=22   unreachable=0    failed=10   skipped=111  rescued=0    ignored=0   
kube3                      : ok=94   changed=22   unreachable=0    failed=10   skipped=113  rescued=0    ignored=0   

Saturday 24 August 2019  01:40:57 +0100 (0:00:02.736)       0:03:53.015 ******* 
===============================================================================
Install packages ------------------------------------------------------- 35.87s
Wait for host to be available ------------------------------------------ 32.02s
gather facts from all instances ---------------------------------------- 14.50s
container-engine/docker : Docker | pause while Docker restarts --------- 10.22s
Persist loaded modules -------------------------------------------------- 5.84s
container-engine/docker : Docker | reload docker ------------------------ 4.13s
kubernetes/preinstall : Create kubernetes directories ------------------- 4.04s
download : Download items ----------------------------------------------- 2.74s
Load required kernel modules -------------------------------------------- 2.61s
bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.45s
Extend root VG ---------------------------------------------------------- 2.41s
bootstrap-os : Gather nodes hostnames ----------------------------------- 2.36s
kubernetes/preinstall : Create cni directories -------------------------- 2.32s
container-engine/docker : Write docker options systemd drop-in ---------- 2.30s
container-engine/docker : Write docker dns systemd drop-in -------------- 2.19s
kubernetes/preinstall : Set selinux policy ------------------------------ 2.12s
Gathering Facts --------------------------------------------------------- 2.08s
kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.07s
container-engine/docker : restart docker -------------------------------- 2.02s
container-engine/docker : ensure service is started if docker packages are already present --- 2.01s
==> kube3: An error occurred. The error will be shown after all tasks complete.
An error occurred while executing multiple actions in parallel.
Any errors that occurred are shown below.

An error occurred while executing the action on the 'kube3'
machine. Please handle this error then try again:

Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.
Build step 'Execute shell' marked build as failure
Performing Post build task...
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Sat Aug 24 01:24:29 2019 From: ci at centos.org (ci at centos.org) Date: Sat, 24 Aug 2019 01:24:29 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #294 In-Reply-To: <532940872.4319.1566523439337.JavaMail.jenkins@jenkins.ci.centos.org> References: <532940872.4319.1566523439337.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <979381554.4388.1566609869563.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 61.08 KB...] FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=7 changed=4 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── dependency ├── cleanup ├── destroy ├── syntax ├── create ├── prepare ├── converge ├── 
idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... [701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', 
u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'cleanup' Skipping, cleanup playbook not configured. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=instance) TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 --> Pruning extra files from scenario ephemeral directory ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── dependency ├── create └── prepare --> Scenario: 'default' --> Action: 'dependency' Skipping, missing the requirements file. --> Scenario: 'default' --> Action: 'create' --> Sanity checks: 'docker' PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Determine which docker image info module to use] ************************* ok: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image (new)] ********************************* ok: [localhost] => (item=molecule_local/centos/systemd) TASK [Build an Ansible compatible image (old)] ********************************* skipping: [localhost] => (item=molecule_local/centos/systemd) TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: 
[localhost] => (item=instance) TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=7 changed=3 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── dependency ├── cleanup ├── destroy ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', 
u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'cleanup' Skipping, cleanup playbook not configured. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=instance) TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 --> Pruning extra files from scenario ephemeral directory Build step 'Execute shell' marked build as failure Performing Post build task... 
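[Editor's note: the ansible-lint failures above ([701] "Role info should contain platforms" and the [703] warnings) are all caused by the role's meta/main.yml still carrying Ansible Galaxy's placeholder metadata. A minimal sketch of a meta/main.yml that would satisfy those rules is below; every field value here is an illustrative placeholder chosen by the editor, not the project's real metadata.]

```yaml
# Sketch only: illustrative values that clear ansible-lint 701/703;
# the real author/description/license belong to the project.
galaxy_info:
  author: Gluster maintainers                      # 703: replace "your name"
  description: Configure firewalld for GlusterFS   # 703: replace "your description"
  company: example                                 # 703: replace "your company (optional)"
  license: GPLv3                                   # 703: replace "license (GPLv2, CC-BY, etc)"
  min_ansible_version: 2.5
  platforms:                                       # 701: role info should contain platforms
    - name: EL
      versions:
        - 7
  galaxy_tags: []
dependencies: []
```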
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Sun Aug 25 00:16:52 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 25 Aug 2019 00:16:52 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #466 In-Reply-To: <2119342915.4381.1566605802856.JavaMail.jenkins@jenkins.ci.centos.org> References: <2119342915.4381.1566605802856.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <310772996.4444.1566692212286.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 39.05 KB...] Transaction test succeeded Running transaction Installing : python36-libs-3.6.8-1.el7.x86_64 1/52 Installing : python36-3.6.8-1.el7.x86_64 2/52 Installing : apr-1.4.8-3.el7_4.1.x86_64 3/52 Installing : mpfr-3.1.1-4.el7.x86_64 4/52 Installing : libmpc-1.0.1-3.el7.x86_64 5/52 Installing : apr-util-1.5.2-6.el7.x86_64 6/52 Installing : python36-six-1.11.0-3.el7.noarch 7/52 Installing : cpp-4.8.5-36.el7_6.2.x86_64 8/52 Installing : python36-idna-2.7-2.el7.noarch 9/52 Installing : python36-pysocks-1.6.8-6.el7.noarch 10/52 Installing : python36-urllib3-1.19.1-5.el7.noarch 11/52 Installing : python36-pyroute2-0.4.13-2.el7.noarch 12/52 Installing : python36-setuptools-39.2.0-3.el7.noarch 13/52 Installing : python36-chardet-2.3.0-6.el7.noarch 14/52 Installing : python36-requests-2.12.5-3.el7.noarch 15/52 Installing : python36-distro-1.2.0-3.el7.noarch 16/52 Installing : python36-markupsafe-0.23-3.el7.x86_64 17/52 Installing : python36-jinja2-2.8.1-2.el7.noarch 18/52 Installing : python36-rpm-4.11.3-4.el7.x86_64 19/52 Installing : elfutils-0.172-2.el7.x86_64 20/52 Installing : unzip-6.0-19.el7.x86_64 21/52 Installing : dwz-0.11-3.el7.x86_64 
22/52 Installing : bzip2-1.0.6-13.el7.x86_64 23/52 Installing : distribution-gpg-keys-1.32-1.el7.noarch 24/52 Installing : mock-core-configs-30.4-1.el7.noarch 25/52 Installing : usermode-1.111-5.el7.x86_64 26/52 Installing : pakchois-0.4-10.el7.x86_64 27/52 Installing : patch-2.7.1-10.el7_5.x86_64 28/52 Installing : libmodman-2.0.1-8.el7.x86_64 29/52 Installing : libproxy-0.4.11-11.el7.x86_64 30/52 Installing : gdb-7.6.1-114.el7.x86_64 31/52 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/52 Installing : perl-srpm-macros-1-8.el7.noarch 33/52 Installing : pigz-2.3.4-1.el7.x86_64 34/52 Installing : golang-src-1.11.5-1.el7.noarch 35/52 Installing : nettle-2.7.1-8.el7.x86_64 36/52 Installing : kernel-headers-3.10.0-957.27.2.el7.x86_64 37/52 Installing : glibc-headers-2.17-260.el7_6.6.x86_64 38/52 Installing : glibc-devel-2.17-260.el7_6.6.x86_64 39/52 Installing : gcc-4.8.5-36.el7_6.2.x86_64 40/52 Installing : zip-3.0-11.el7.x86_64 41/52 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 42/52 Installing : mercurial-2.6.2-8.el7_4.x86_64 43/52 Installing : trousers-0.3.14-2.el7.x86_64 44/52 Installing : gnutls-3.3.29-9.el7_6.x86_64 45/52 Installing : neon-0.30.0-3.el7.x86_64 46/52 Installing : subversion-libs-1.7.14-14.el7.x86_64 47/52 Installing : subversion-1.7.14-14.el7.x86_64 48/52 Installing : golang-1.11.5-1.el7.x86_64 49/52 Installing : golang-bin-1.11.5-1.el7.x86_64 50/52 Installing : rpm-build-4.11.3-35.el7.x86_64 51/52 Installing : mock-1.4.16-1.el7.noarch 52/52 Verifying : trousers-0.3.14-2.el7.x86_64 1/52 Verifying : python36-idna-2.7-2.el7.noarch 2/52 Verifying : rpm-build-4.11.3-35.el7.x86_64 3/52 Verifying : glibc-headers-2.17-260.el7_6.6.x86_64 4/52 Verifying : python36-pysocks-1.6.8-6.el7.noarch 5/52 Verifying : mercurial-2.6.2-8.el7_4.x86_64 6/52 Verifying : zip-3.0-11.el7.x86_64 7/52 Verifying : python36-3.6.8-1.el7.x86_64 8/52 Verifying : subversion-libs-1.7.14-14.el7.x86_64 9/52 Verifying : python36-urllib3-1.19.1-5.el7.noarch 10/52 
Verifying : kernel-headers-3.10.0-957.27.2.el7.x86_64 11/52 Verifying : nettle-2.7.1-8.el7.x86_64 12/52 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 13/52 Verifying : golang-src-1.11.5-1.el7.noarch 14/52 Verifying : python36-pyroute2-0.4.13-2.el7.noarch 15/52 Verifying : pigz-2.3.4-1.el7.x86_64 16/52 Verifying : perl-srpm-macros-1-8.el7.noarch 17/52 Verifying : golang-1.11.5-1.el7.x86_64 18/52 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 19/52 Verifying : golang-bin-1.11.5-1.el7.x86_64 20/52 Verifying : gdb-7.6.1-114.el7.x86_64 21/52 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 22/52 Verifying : gnutls-3.3.29-9.el7_6.x86_64 23/52 Verifying : mock-1.4.16-1.el7.noarch 24/52 Verifying : libmodman-2.0.1-8.el7.x86_64 25/52 Verifying : python36-setuptools-39.2.0-3.el7.noarch 26/52 Verifying : mpfr-3.1.1-4.el7.x86_64 27/52 Verifying : python36-six-1.11.0-3.el7.noarch 28/52 Verifying : apr-util-1.5.2-6.el7.x86_64 29/52 Verifying : python36-chardet-2.3.0-6.el7.noarch 30/52 Verifying : patch-2.7.1-10.el7_5.x86_64 31/52 Verifying : libmpc-1.0.1-3.el7.x86_64 32/52 Verifying : pakchois-0.4-10.el7.x86_64 33/52 Verifying : neon-0.30.0-3.el7.x86_64 34/52 Verifying : usermode-1.111-5.el7.x86_64 35/52 Verifying : apr-1.4.8-3.el7_4.1.x86_64 36/52 Verifying : libproxy-0.4.11-11.el7.x86_64 37/52 Verifying : mock-core-configs-30.4-1.el7.noarch 38/52 Verifying : distribution-gpg-keys-1.32-1.el7.noarch 39/52 Verifying : glibc-devel-2.17-260.el7_6.6.x86_64 40/52 Verifying : bzip2-1.0.6-13.el7.x86_64 41/52 Verifying : subversion-1.7.14-14.el7.x86_64 42/52 Verifying : python36-distro-1.2.0-3.el7.noarch 43/52 Verifying : dwz-0.11-3.el7.x86_64 44/52 Verifying : unzip-6.0-19.el7.x86_64 45/52 Verifying : python36-markupsafe-0.23-3.el7.x86_64 46/52 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 47/52 Verifying : python36-requests-2.12.5-3.el7.noarch 48/52 Verifying : python36-jinja2-2.8.1-2.el7.noarch 49/52 Verifying : python36-libs-3.6.8-1.el7.x86_64 50/52 Verifying : 
elfutils-0.172-2.el7.x86_64 51/52 Verifying : python36-rpm-4.11.3-4.el7.x86_64 52/52 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.16-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.32-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.6 glibc-headers.x86_64 0:2.17-260.el7_6.6 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.27.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.4-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python36.x86_64 0:3.6.8-1.el7 python36-chardet.noarch 0:2.3.0-6.el7 python36-distro.noarch 0:1.2.0-3.el7 python36-idna.noarch 0:2.7-2.el7 python36-jinja2.noarch 0:2.8.1-2.el7 python36-libs.x86_64 0:3.6.8-1.el7 python36-markupsafe.x86_64 0:0.23-3.el7 python36-pyroute2.noarch 0:0.4.13-2.el7 python36-pysocks.noarch 0:1.6.8-6.el7 python36-requests.noarch 0:2.12.5-3.el7 python36-rpm.x86_64 0:4.11.3-4.el7 python36-setuptools.noarch 0:39.2.0-3.el7 python36-six.noarch 0:1.11.0-3.el7 python36-urllib3.noarch 0:1.19.1-5.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1841 0 --:--:-- --:--:-- --:--:-- 1844 100 8513k 100 8513k 0 0 11.9M 0 --:--:-- --:--:-- --:--:-- 11.9M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2193 0 --:--:-- --:--:-- --:--:-- 2200 0 38.3M 0 101k 0 0 242k 0 0:02:42 --:--:-- 0:02:42 242k100 38.3M 100 38.3M 0 0 40.3M 0 --:--:-- --:--:-- --:--:-- 71.9M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 630 0 --:--:-- --:--:-- --:--:-- 629 0 0 0 620 0 0 1933 0 --:--:-- --:--:-- --:--:-- 1933 100 10.7M 100 10.7M 0 0 15.4M 0 --:--:-- --:--:-- --:--:-- 15.4M ~/nightlyrpm34906W/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpm34906W/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpm34906W/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpm34906W ~ INFO: mock.py version 1.4.16 starting (python version = 3.6.8)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpm34906W/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.16 INFO: Mock Version: 1.4.16 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpm34906W/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 26 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 681159e46d1c4b85a26a952f0523bd5a -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.06o26pj9:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins7451627378309935725.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done 62d17ab2 +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 96 | n32.pufty | 172.19.3.96 | pufty | 3943 | Deployed | 62d17ab2 | None | None | 7 | x86_64 | 1 | 2310 | None | +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Sun Aug 25 00:40:57 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 25 Aug 2019 00:40:57 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #270 In-Reply-To: <1208299325.4382.1566607257608.JavaMail.jenkins@jenkins.ci.centos.org> References: <1208299325.4382.1566607257608.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <939447952.4445.1566693657863.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 288.96 KB...] 
TASK [container-engine/docker : check number of search domains] **************** Sunday 25 August 2019 01:40:14 +0100 (0:00:00.348) 0:03:04.783 ********* TASK [container-engine/docker : check length of search domains] **************** Sunday 25 August 2019 01:40:14 +0100 (0:00:00.354) 0:03:05.138 ********* TASK [container-engine/docker : check for minimum kernel version] ************** Sunday 25 August 2019 01:40:15 +0100 (0:00:00.305) 0:03:05.443 ********* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Sunday 25 August 2019 01:40:15 +0100 (0:00:00.294) 0:03:05.737 ********* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] *** Sunday 25 August 2019 01:40:15 +0100 (0:00:00.572) 0:03:06.310 ********* TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Sunday 25 August 2019 01:40:17 +0100 (0:00:01.376) 0:03:07.686 ********* TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Sunday 25 August 2019 01:40:17 +0100 (0:00:00.265) 0:03:07.952 ********* TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Sunday 25 August 2019 01:40:17 +0100 (0:00:00.259) 0:03:08.211 ********* TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Sunday 25 August 2019 01:40:18 +0100 (0:00:00.316) 0:03:08.528 ********* TASK [container-engine/docker : Configure docker repository on Fedora] ********* Sunday 25 August 2019 01:40:18 +0100 (0:00:00.309) 0:03:08.837 ********* TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Sunday 25 August 2019 01:40:18 +0100 (0:00:00.291) 0:03:09.129 ********* TASK [container-engine/docker : Copy yum.conf for editing] ********************* Sunday 25 August 2019 01:40:18 +0100 (0:00:00.285) 0:03:09.415 ********* TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Sunday 25 August 2019 
01:40:19 +0100 (0:00:00.301) 0:03:09.716 *********
TASK [container-engine/docker : ensure docker packages are installed] **********
Sunday 25 August 2019 01:40:19 +0100 (0:00:00.285) 0:03:10.002 *********
TASK [container-engine/docker : Ensure docker packages are installed] **********
Sunday 25 August 2019 01:40:19 +0100 (0:00:00.365) 0:03:10.367 *********
TASK [container-engine/docker : get available packages on Ubuntu] **************
Sunday 25 August 2019 01:40:20 +0100 (0:00:00.341) 0:03:10.709 *********
TASK [container-engine/docker : show available packages on ubuntu] *************
Sunday 25 August 2019 01:40:20 +0100 (0:00:00.292) 0:03:11.002 *********
TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] ***
Sunday 25 August 2019 01:40:20 +0100 (0:00:00.330) 0:03:11.332 *********
TASK [container-engine/docker : ensure service is started if docker packages are already present] ***
Sunday 25 August 2019 01:40:21 +0100 (0:00:00.293) 0:03:11.625 *********
ok: [kube2]
ok: [kube1]
ok: [kube3]
[WARNING]: flush_handlers task does not support when conditional
TASK [container-engine/docker : set fact for docker_version] *******************
Sunday 25 August 2019 01:40:23 +0100 (0:00:01.887) 0:03:13.513 *********
ok: [kube1]
ok: [kube2]
ok: [kube3]
TASK [container-engine/docker : check minimum docker version for docker_dns mode. You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] ***
Sunday 25 August 2019 01:40:24 +0100 (0:00:01.050) 0:03:14.564 *********
TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] ***
Sunday 25 August 2019 01:40:24 +0100 (0:00:00.291) 0:03:14.855 *********
changed: [kube1]
changed: [kube2]
changed: [kube3]
TASK [container-engine/docker : Write docker proxy drop-in] ********************
Sunday 25 August 2019 01:40:25 +0100 (0:00:01.021) 0:03:15.877 *********
TASK [container-engine/docker : get systemd version] ***************************
Sunday 25 August 2019 01:40:25 +0100 (0:00:00.310) 0:03:16.187 *********
TASK [container-engine/docker : Write docker.service systemd file] *************
Sunday 25 August 2019 01:40:26 +0100 (0:00:00.365) 0:03:16.553 *********
TASK [container-engine/docker : Write docker options systemd drop-in] **********
Sunday 25 August 2019 01:40:26 +0100 (0:00:00.363) 0:03:16.917 *********
changed: [kube1]
changed: [kube2]
changed: [kube3]
TASK [container-engine/docker : Write docker dns systemd drop-in] **************
Sunday 25 August 2019 01:40:28 +0100 (0:00:02.286) 0:03:19.204 *********
changed: [kube1]
changed: [kube2]
changed: [kube3]
TASK [container-engine/docker : Copy docker orphan clean up script to the node] ***
Sunday 25 August 2019 01:40:31 +0100 (0:00:02.295) 0:03:21.499 *********
TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] ***
Sunday 25 August 2019 01:40:31 +0100 (0:00:00.312) 0:03:21.811 *********
RUNNING HANDLER [container-engine/docker : restart docker] *********************
Sunday 25 August 2019 01:40:31 +0100 (0:00:00.240) 0:03:22.052 *********
changed: [kube1]
changed: [kube2]
changed: [kube3]
RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************
Sunday 25 August 2019 01:40:33 +0100 (0:00:01.867) 0:03:23.919 *********
changed: [kube1]
changed: [kube2]
changed: [kube3]
RUNNING HANDLER [container-engine/docker : Docker | reload docker.socket] ******
Sunday 25 August 2019 01:40:34 +0100 (0:00:01.154) 0:03:25.074 *********
RUNNING HANDLER [container-engine/docker : Docker | reload docker] *************
Sunday 25 August 2019 01:40:35 +0100 (0:00:00.369) 0:03:25.444 *********
changed: [kube1]
changed: [kube3]
changed: [kube2]
RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] ***
Sunday 25 August 2019 01:40:39 +0100 (0:00:04.082) 0:03:29.526 *********
Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
[container-engine/docker : Docker | pause while Docker restarts]
Waiting for docker restart:
ok: [kube1]
RUNNING HANDLER [container-engine/docker : Docker | wait for docker] ***********
Sunday 25 August 2019 01:40:49 +0100 (0:00:10.223) 0:03:39.749 *********
changed: [kube1]
changed: [kube2]
changed: [kube3]
TASK [container-engine/docker : ensure docker service is started and enabled] ***
Sunday 25 August 2019 01:40:50 +0100 (0:00:01.369) 0:03:41.118 *********
ok: [kube1] => (item=docker)
ok: [kube2] => (item=docker)
ok: [kube3] => (item=docker)
TASK [download : include_tasks] ************************************************
Sunday 25 August 2019 01:40:51 +0100 (0:00:01.236) 0:03:42.355 *********
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3
TASK [download : Register docker images info] **********************************
Sunday 25 August 2019 01:40:52 +0100 (0:00:00.540) 0:03:42.895 *********
ok: [kube1]
ok: [kube3]
ok: [kube2]
TASK [download : container_download | Create dest directory for saved/loaded container images] ***
Sunday 25 August 2019 01:40:53 +0100 (0:00:01.030) 0:03:43.926 *********
changed: [kube1]
changed: [kube2]
changed: [kube3]
TASK [download : container_download | create local directory for saved/loaded container images] ***
Sunday 25 August 2019 01:40:54 +0100 (0:00:01.024) 0:03:44.950 *********
TASK [download :
Download items] ***********************************************
Sunday 25 August 2019 01:40:54 +0100 (0:00:00.149) 0:03:45.100 *********
fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
fatal: [kube2]: FAILED! => [...same 'delegate_to' error as kube1...]
fatal: [kube3]: FAILED! => [...same 'delegate_to' error as kube1...]
[...the identical 'delegate_to' failure repeated for kube1, kube2 and kube3 on each remaining download task, interleaved with:]
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
PLAY RECAP *********************************************************************
kube1 : ok=109 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0
kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0
kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0
Sunday 25 August 2019 01:40:57 +0100 (0:00:02.779) 0:03:47.879 *********
===============================================================================
Install packages ------------------------------------------------------- 35.32s
Wait for host to be available ------------------------------------------ 23.92s
gather facts from all instances ---------------------------------------- 17.40s
container-engine/docker : Docker | pause while Docker restarts --------- 10.22s
Persist loaded modules -------------------------------------------------- 6.15s
kubernetes/preinstall : Create kubernetes directories ------------------- 4.12s
container-engine/docker : Docker | reload docker ------------------------ 4.08s
download : Download items ----------------------------------------------- 2.78s
Load required kernel modules -------------------------------------------- 2.73s
bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.52s
Extend root VG ---------------------------------------------------------- 2.45s
bootstrap-os : Gather nodes hostnames ----------------------------------- 2.45s
kubernetes/preinstall : Create cni directories -------------------------- 2.44s
container-engine/docker : Write docker dns systemd drop-in -------------- 2.30s
container-engine/docker : Write docker options systemd drop-in ---------- 2.29s
download : Sync container ----------------------------------------------- 2.23s
download : Download items ----------------------------------------------- 2.15s
kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.13s
kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.05s
Gathering Facts --------------------------------------------------------- 2.02s
==> kube3: An error occurred. The error will be shown after all tasks complete.
An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below.
An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again:
Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again.
Build step 'Execute shell' marked build as failure
Performing Post build task...
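[Editor's note: every download task above fails the same way because `delegate_to` is attached directly to an `include_tasks` entry in the Kubespray download role, which the Ansible version on this builder rejects ("'delegate_to' is not a valid attribute for a TaskInclude"). A minimal sketch of the failing shape and one way to restructure it; the task name and the `download_delegate` variable are illustrative, not copied from the Kubespray source:]

```yaml
# Rejected: delegate_to cannot be set on the include itself.
- name: container_download | include download tasks   # illustrative name
  include_tasks: download_container.yml
  delegate_to: "{{ download_delegate }}"              # illustrative variable

# Accepted: push the keyword down onto the included tasks via apply:
# (available since Ansible 2.7). static import_tasks is another option.
- name: container_download | include download tasks   # illustrative name
  include_tasks:
    file: download_container.yml
    apply:
      delegate_to: "{{ download_delegate }}"
```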
Could not match :Build started : False
Logical operation result is FALSE
Skipping script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0

From ci at centos.org Sun Aug 25 01:19:19 2019
From: ci at centos.org (ci at centos.org)
Date: Sun, 25 Aug 2019 01:19:19 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #295
In-Reply-To: <979381554.4388.1566609869563.JavaMail.jenkins@jenkins.ci.centos.org>
References: <979381554.4388.1566609869563.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1109188637.4459.1566695959475.JavaMail.jenkins@jenkins.ci.centos.org>

See ------------------------------------------
[...truncated 61.12 KB...]
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=7 changed=4 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
--> Scenario: 'default'
--> Action: 'prepare'
PLAY [Prepare] *****************************************************************
TASK [Gathering Facts] *********************************************************
ok: [instance]
TASK [Install Dependency Packages] *********************************************
changed: [instance]
TASK [Install Dependency Packages] *********************************************
changed: [instance]
PLAY RECAP *********************************************************************
instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix ??? default ??? lint ??? dependency ??? cleanup ??? destroy ??? syntax ??? create ??? prepare ??? converge ???
idempotence ??? side_effect ??? verify ??? cleanup ??? destroy
--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: author
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[...same metadata dump as above...]
[703] Should change default metadata: description
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[...same metadata dump as above...]
[703] Should change default metadata: company
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[...same metadata dump as above...]
[703] Should change default metadata: license
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[...same metadata dump as above...]
An error occurred during the test sequence action: 'lint'. Cleaning up.
--> Scenario: 'default'
--> Action: 'cleanup'
Skipping, cleanup playbook not configured.
--> Scenario: 'default'
--> Action: 'destroy'
PLAY [Destroy] *****************************************************************
TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=instance)
TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Delete docker network(s)] ************************************************
PLAY RECAP *********************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
--> Pruning extra files from scenario ephemeral directory
./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix ??? default ??? dependency ??? create ??? prepare
--> Scenario: 'default'
--> Action: 'dependency'
Skipping, missing the requirements file.
--> Scenario: 'default'
--> Action: 'create'
--> Sanity checks: 'docker'
PLAY [Create] ******************************************************************
TASK [Log into a Docker registry] **********************************************
skipping: [localhost] => (item=None)
TASK [Create Dockerfiles from image names] *************************************
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Determine which docker image info module to use] *************************
ok: [localhost]
TASK [Discover local Docker images] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]
TASK [Build an Ansible compatible image (new)] *********************************
ok: [localhost] => (item=molecule_local/centos/systemd)
TASK [Build an Ansible compatible image (old)] *********************************
skipping: [localhost] => (item=molecule_local/centos/systemd)
TASK [Create docker network(s)] ************************************************
TASK [Determine the CMD directives] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]
TASK [Create molecule instance(s)] *********************************************
changed: [localhost] => (item=instance)
TASK [Wait for instance(s) creation to complete] *******************************
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=7 changed=3 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
--> Scenario: 'default'
--> Action: 'prepare'
PLAY [Prepare] *****************************************************************
TASK [Gathering Facts] *********************************************************
ok: [instance]
TASK [Install Dependency Packages] *********************************************
changed: [instance]
TASK [Install Dependency Packages] *********************************************
changed: [instance]
PLAY RECAP *********************************************************************
instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix ??? default ??? lint ??? dependency ??? cleanup ??? destroy ??? syntax ??? create ??? prepare ??? converge ??? idempotence ??? side_effect ??? verify ??? cleanup ??? destroy
--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: author
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[...same metadata dump as above...]
[703] Should change default metadata: description
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[...same metadata dump as above...]
[703] Should change default metadata: company
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[...same metadata dump as above...]
[703] Should change default metadata: license
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[...same metadata dump as above...]
An error occurred during the test sequence action: 'lint'. Cleaning up.
--> Scenario: 'default'
--> Action: 'cleanup'
Skipping, cleanup playbook not configured.
--> Scenario: 'default'
--> Action: 'destroy'
PLAY [Destroy] *****************************************************************
TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=instance)
TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Delete docker network(s)] ************************************************
PLAY RECAP *********************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
--> Pruning extra files from scenario ephemeral directory
Build step 'Execute shell' marked build as failure
Performing Post build task...
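[Editor's note: the ansible-lint failures above ([701] and [703]) both stem from roles/firewall_config/meta/main.yml still carrying the galaxy-init placeholder values. A sketch of a filled-in galaxy_info that would clear those rules; the author, description, company, license, and platform values here are placeholders to adapt, not taken from the repository:]

```yaml
galaxy_info:
  author: Gluster Ansible maintainers              # placeholder, adapt
  description: Configure firewalld for GlusterFS   # placeholder, adapt
  company: Red Hat                                 # placeholder, adapt
  license: GPLv3                                   # placeholder, adapt
  min_ansible_version: 2.5
  platforms:            # rule [701] requires at least one platform entry
    - name: EL
      versions:
        - 7
  galaxy_tags:
    - gluster
dependencies: []
```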
Could not match :Build started : False
Logical operation result is FALSE
Skipping script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0

From ci at centos.org Mon Aug 26 00:16:36 2019
From: ci at centos.org (ci at centos.org)
Date: Mon, 26 Aug 2019 00:16:36 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #467
In-Reply-To: <310772996.4444.1566692212286.JavaMail.jenkins@jenkins.ci.centos.org>
References: <310772996.4444.1566692212286.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <742053103.4501.1566778596094.JavaMail.jenkins@jenkins.ci.centos.org>

See ------------------------------------------
[...truncated 38.69 KB...]
Transaction test succeeded Running transaction Installing : python36-libs-3.6.8-1.el7.x86_64 1/52 Installing : python36-3.6.8-1.el7.x86_64 2/52 Installing : apr-1.4.8-3.el7_4.1.x86_64 3/52 Installing : mpfr-3.1.1-4.el7.x86_64 4/52 Installing : libmpc-1.0.1-3.el7.x86_64 5/52 Installing : apr-util-1.5.2-6.el7.x86_64 6/52 Installing : python36-six-1.11.0-3.el7.noarch 7/52 Installing : cpp-4.8.5-36.el7_6.2.x86_64 8/52 Installing : python36-idna-2.7-2.el7.noarch 9/52 Installing : python36-pysocks-1.6.8-6.el7.noarch 10/52 Installing : python36-urllib3-1.19.1-5.el7.noarch 11/52 Installing : python36-pyroute2-0.4.13-2.el7.noarch 12/52 Installing : python36-setuptools-39.2.0-3.el7.noarch 13/52 Installing : python36-chardet-2.3.0-6.el7.noarch 14/52 Installing : python36-requests-2.12.5-3.el7.noarch 15/52 Installing : python36-distro-1.2.0-3.el7.noarch 16/52 Installing : python36-markupsafe-0.23-3.el7.x86_64 17/52 Installing : python36-jinja2-2.8.1-2.el7.noarch 18/52 Installing : python36-rpm-4.11.3-4.el7.x86_64 19/52 Installing : elfutils-0.172-2.el7.x86_64 20/52 Installing : unzip-6.0-19.el7.x86_64 21/52 Installing : dwz-0.11-3.el7.x86_64 
22/52 Installing : bzip2-1.0.6-13.el7.x86_64 23/52 Installing : distribution-gpg-keys-1.32-1.el7.noarch 24/52 Installing : mock-core-configs-30.4-1.el7.noarch 25/52 Installing : usermode-1.111-5.el7.x86_64 26/52 Installing : pakchois-0.4-10.el7.x86_64 27/52 Installing : patch-2.7.1-10.el7_5.x86_64 28/52 Installing : libmodman-2.0.1-8.el7.x86_64 29/52 Installing : libproxy-0.4.11-11.el7.x86_64 30/52 Installing : gdb-7.6.1-114.el7.x86_64 31/52 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/52 Installing : perl-srpm-macros-1-8.el7.noarch 33/52 Installing : pigz-2.3.4-1.el7.x86_64 34/52 Installing : golang-src-1.11.5-1.el7.noarch 35/52 Installing : nettle-2.7.1-8.el7.x86_64 36/52 Installing : kernel-headers-3.10.0-957.27.2.el7.x86_64 37/52 Installing : glibc-headers-2.17-260.el7_6.6.x86_64 38/52 Installing : glibc-devel-2.17-260.el7_6.6.x86_64 39/52 Installing : gcc-4.8.5-36.el7_6.2.x86_64 40/52 Installing : zip-3.0-11.el7.x86_64 41/52 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 42/52 Installing : mercurial-2.6.2-8.el7_4.x86_64 43/52 Installing : trousers-0.3.14-2.el7.x86_64 44/52 Installing : gnutls-3.3.29-9.el7_6.x86_64 45/52 Installing : neon-0.30.0-3.el7.x86_64 46/52 Installing : subversion-libs-1.7.14-14.el7.x86_64 47/52 Installing : subversion-1.7.14-14.el7.x86_64 48/52 Installing : golang-1.11.5-1.el7.x86_64 49/52 Installing : golang-bin-1.11.5-1.el7.x86_64 50/52 Installing : rpm-build-4.11.3-35.el7.x86_64 51/52 Installing : mock-1.4.16-1.el7.noarch 52/52 Verifying : trousers-0.3.14-2.el7.x86_64 1/52 Verifying : python36-idna-2.7-2.el7.noarch 2/52 Verifying : rpm-build-4.11.3-35.el7.x86_64 3/52 Verifying : glibc-headers-2.17-260.el7_6.6.x86_64 4/52 Verifying : python36-pysocks-1.6.8-6.el7.noarch 5/52 Verifying : mercurial-2.6.2-8.el7_4.x86_64 6/52 Verifying : zip-3.0-11.el7.x86_64 7/52 Verifying : python36-3.6.8-1.el7.x86_64 8/52 Verifying : subversion-libs-1.7.14-14.el7.x86_64 9/52 Verifying : python36-urllib3-1.19.1-5.el7.noarch 10/52 
Verifying : kernel-headers-3.10.0-957.27.2.el7.x86_64 11/52 Verifying : nettle-2.7.1-8.el7.x86_64 12/52 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 13/52 Verifying : golang-src-1.11.5-1.el7.noarch 14/52 Verifying : python36-pyroute2-0.4.13-2.el7.noarch 15/52 Verifying : pigz-2.3.4-1.el7.x86_64 16/52 Verifying : perl-srpm-macros-1-8.el7.noarch 17/52 Verifying : golang-1.11.5-1.el7.x86_64 18/52 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 19/52 Verifying : golang-bin-1.11.5-1.el7.x86_64 20/52 Verifying : gdb-7.6.1-114.el7.x86_64 21/52 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 22/52 Verifying : gnutls-3.3.29-9.el7_6.x86_64 23/52 Verifying : mock-1.4.16-1.el7.noarch 24/52 Verifying : libmodman-2.0.1-8.el7.x86_64 25/52 Verifying : python36-setuptools-39.2.0-3.el7.noarch 26/52 Verifying : mpfr-3.1.1-4.el7.x86_64 27/52 Verifying : python36-six-1.11.0-3.el7.noarch 28/52 Verifying : apr-util-1.5.2-6.el7.x86_64 29/52 Verifying : python36-chardet-2.3.0-6.el7.noarch 30/52 Verifying : patch-2.7.1-10.el7_5.x86_64 31/52 Verifying : libmpc-1.0.1-3.el7.x86_64 32/52 Verifying : pakchois-0.4-10.el7.x86_64 33/52 Verifying : neon-0.30.0-3.el7.x86_64 34/52 Verifying : usermode-1.111-5.el7.x86_64 35/52 Verifying : apr-1.4.8-3.el7_4.1.x86_64 36/52 Verifying : libproxy-0.4.11-11.el7.x86_64 37/52 Verifying : mock-core-configs-30.4-1.el7.noarch 38/52 Verifying : distribution-gpg-keys-1.32-1.el7.noarch 39/52 Verifying : glibc-devel-2.17-260.el7_6.6.x86_64 40/52 Verifying : bzip2-1.0.6-13.el7.x86_64 41/52 Verifying : subversion-1.7.14-14.el7.x86_64 42/52 Verifying : python36-distro-1.2.0-3.el7.noarch 43/52 Verifying : dwz-0.11-3.el7.x86_64 44/52 Verifying : unzip-6.0-19.el7.x86_64 45/52 Verifying : python36-markupsafe-0.23-3.el7.x86_64 46/52 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 47/52 Verifying : python36-requests-2.12.5-3.el7.noarch 48/52 Verifying : python36-jinja2-2.8.1-2.el7.noarch 49/52 Verifying : python36-libs-3.6.8-1.el7.x86_64 50/52 Verifying : 
elfutils-0.172-2.el7.x86_64 51/52 Verifying : python36-rpm-4.11.3-4.el7.x86_64 52/52 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.16-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.32-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.6 glibc-headers.x86_64 0:2.17-260.el7_6.6 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.27.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.4-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python36.x86_64 0:3.6.8-1.el7 python36-chardet.noarch 0:2.3.0-6.el7 python36-distro.noarch 0:1.2.0-3.el7 python36-idna.noarch 0:2.7-2.el7 python36-jinja2.noarch 0:2.8.1-2.el7 python36-libs.x86_64 0:3.6.8-1.el7 python36-markupsafe.x86_64 0:0.23-3.el7 python36-pyroute2.noarch 0:0.4.13-2.el7 python36-pysocks.noarch 0:1.6.8-6.el7 python36-requests.noarch 0:2.12.5-3.el7 python36-rpm.x86_64 0:4.11.3-4.el7 python36-setuptools.noarch 0:39.2.0-3.el7 python36-six.noarch 0:1.11.0-3.el7 python36-urllib3.noarch 0:1.19.1-5.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1934 0 --:--:-- --:--:-- --:--:-- 1939 100 8513k 100 8513k 0 0 12.5M 0 --:--:-- --:--:-- --:--:-- 12.5M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2057 0 --:--:-- --:--:-- --:--:-- 2055 100 38.3M 100 38.3M 0 0 45.5M 0 --:--:-- --:--:-- --:--:-- 45.5M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 594 0 --:--:-- --:--:-- --:--:-- 595 0 0 0 620 0 0 1741 0 --:--:-- --:--:-- --:--:-- 1741 100 10.7M 100 10.7M 0 0 16.2M 0 --:--:-- --:--:-- --:--:-- 16.2M ~/nightlyrpmHVtn4R/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmHVtn4R/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmHVtn4R/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmHVtn4R ~ INFO: mock.py version 1.4.16 starting (python version = 3.6.8)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmHVtn4R/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.16 INFO: Mock Version: 1.4.16 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmHVtn4R/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 25 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 2517017517f6482fa906df486456b881 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.ginb8fgw:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins5964476587243715893.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done d61bf6b3 +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 186 | n59.crusty | 172.19.2.59 | crusty | 3947 | Deployed | d61bf6b3 | None | None | 7 | x86_64 | 1 | 2580 | None | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Mon Aug 26 00:40:54 2019 From: ci at centos.org (ci at centos.org) Date: Mon, 26 Aug 2019 00:40:54 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #271 In-Reply-To: <939447952.4445.1566693657863.JavaMail.jenkins@jenkins.ci.centos.org> References: <939447952.4445.1566693657863.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <668172693.4503.1566780054299.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 289.04 KB...] 
TASK [container-engine/docker : check number of search domains] **************** Monday 26 August 2019 01:40:11 +0100 (0:00:00.301) 0:03:00.556 ********* TASK [container-engine/docker : check length of search domains] **************** Monday 26 August 2019 01:40:11 +0100 (0:00:00.301) 0:03:00.857 ********* TASK [container-engine/docker : check for minimum kernel version] ************** Monday 26 August 2019 01:40:11 +0100 (0:00:00.299) 0:03:01.157 ********* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Monday 26 August 2019 01:40:12 +0100 (0:00:00.353) 0:03:01.510 ********* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] *** Monday 26 August 2019 01:40:12 +0100 (0:00:00.614) 0:03:02.125 ********* TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Monday 26 August 2019 01:40:14 +0100 (0:00:01.320) 0:03:03.446 ********* TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Monday 26 August 2019 01:40:14 +0100 (0:00:00.299) 0:03:03.746 ********* TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Monday 26 August 2019 01:40:14 +0100 (0:00:00.259) 0:03:04.006 ********* TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Monday 26 August 2019 01:40:15 +0100 (0:00:00.329) 0:03:04.335 ********* TASK [container-engine/docker : Configure docker repository on Fedora] ********* Monday 26 August 2019 01:40:15 +0100 (0:00:00.322) 0:03:04.658 ********* TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Monday 26 August 2019 01:40:15 +0100 (0:00:00.274) 0:03:04.933 ********* TASK [container-engine/docker : Copy yum.conf for editing] ********************* Monday 26 August 2019 01:40:15 +0100 (0:00:00.284) 0:03:05.218 ********* TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Monday 26 August 2019 
01:40:16 +0100 (0:00:00.331) 0:03:05.550 ********* TASK [container-engine/docker : ensure docker packages are installed] ********** Monday 26 August 2019 01:40:16 +0100 (0:00:00.304) 0:03:05.854 ********* TASK [container-engine/docker : Ensure docker packages are installed] ********** Monday 26 August 2019 01:40:16 +0100 (0:00:00.348) 0:03:06.203 ********* TASK [container-engine/docker : get available packages on Ubuntu] ************** Monday 26 August 2019 01:40:17 +0100 (0:00:00.358) 0:03:06.562 ********* TASK [container-engine/docker : show available packages on ubuntu] ************* Monday 26 August 2019 01:40:17 +0100 (0:00:00.273) 0:03:06.836 ********* TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Monday 26 August 2019 01:40:17 +0100 (0:00:00.281) 0:03:07.117 ********* TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Monday 26 August 2019 01:40:18 +0100 (0:00:00.347) 0:03:07.465 ********* ok: [kube1] ok: [kube3] ok: [kube2] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Monday 26 August 2019 01:40:20 +0100 (0:00:01.989) 0:03:09.454 ********* ok: [kube1] ok: [kube2] ok: [kube3] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Monday 26 August 2019 01:40:21 +0100 (0:00:01.100) 0:03:10.555 ********* TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Monday 26 August 2019 01:40:21 +0100 (0:00:00.303) 0:03:10.858 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Monday 26 August 2019 01:40:22 +0100 (0:00:01.039) 0:03:11.898 ********* TASK [container-engine/docker : get systemd version] *************************** Monday 26 August 2019 01:40:22 +0100 (0:00:00.326) 0:03:12.225 ********* TASK [container-engine/docker : Write docker.service systemd file] ************* Monday 26 August 2019 01:40:23 +0100 (0:00:00.309) 0:03:12.535 ********* TASK [container-engine/docker : Write docker options systemd drop-in] ********** Monday 26 August 2019 01:40:23 +0100 (0:00:00.323) 0:03:12.858 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Monday 26 August 2019 01:40:25 +0100 (0:00:02.229) 0:03:15.088 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Monday 26 August 2019 01:40:27 +0100 (0:00:02.121) 0:03:17.210 ********* TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Monday 26 August 2019 01:40:28 +0100 (0:00:00.339) 0:03:17.549 ********* RUNNING HANDLER [container-engine/docker : restart docker] ********************* Monday 26 August 2019 01:40:28 +0100 (0:00:00.265) 0:03:17.814 ********* changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Monday 26 August 2019 01:40:30 +0100 (0:00:01.957) 0:03:19.771 ********* changed: [kube2] changed: [kube1] changed: [kube3] RUNNING HANDLER 
[container-engine/docker : Docker | reload docker.socket] ****** Monday 26 August 2019 01:40:31 +0100 (0:00:01.098) 0:03:20.870 ********* RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Monday 26 August 2019 01:40:31 +0100 (0:00:00.290) 0:03:21.161 ********* changed: [kube1] changed: [kube3] changed: [kube2] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Monday 26 August 2019 01:40:35 +0100 (0:00:03.896) 0:03:25.058 ********* Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube1] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Monday 26 August 2019 01:40:45 +0100 (0:00:10.189) 0:03:35.248 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : ensure docker service is started and enabled] *** Monday 26 August 2019 01:40:47 +0100 (0:00:01.263) 0:03:36.512 ********* ok: [kube1] => (item=docker) ok: [kube3] => (item=docker) ok: [kube2] => (item=docker) TASK [download : include_tasks] ************************************************ Monday 26 August 2019 01:40:48 +0100 (0:00:01.313) 0:03:37.825 ********* included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Monday 26 August 2019 01:40:49 +0100 (0:00:00.531) 0:03:38.357 ********* ok: [kube1] ok: [kube2] ok: [kube3] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Monday 26 August 2019 01:40:50 +0100 (0:00:01.031) 0:03:39.388 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [download : container_download | create local directory for saved/loaded container images] *** Monday 26 August 2019 01:40:51 +0100 (0:00:00.919) 0:03:40.307 ********* TASK [download : 
Download items] *********************************************** Monday 26 August 2019 01:40:51 +0100 (0:00:00.122) 0:03:40.430 ********* fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} PLAY RECAP ********************************************************************* kube1 : ok=109 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0 kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0 kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0 Monday 26 August 2019 01:40:53 +0100 (0:00:02.824) 0:03:43.254 ********* =============================================================================== Install packages ------------------------------------------------------- 35.21s Wait for host to be available ------------------------------------------ 21.65s gather facts from all instances ---------------------------------------- 17.24s container-engine/docker : Docker | pause while Docker restarts --------- 10.19s Persist loaded modules -------------------------------------------------- 6.13s kubernetes/preinstall : Create kubernetes directories ------------------- 4.03s container-engine/docker : Docker | reload docker ------------------------ 3.90s download : Download items ----------------------------------------------- 2.82s bootstrap-os : Assign 
inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.65s Load required kernel modules -------------------------------------------- 2.58s kubernetes/preinstall : Create cni directories -------------------------- 2.50s bootstrap-os : Gather nodes hostnames ----------------------------------- 2.39s Extend root VG ---------------------------------------------------------- 2.38s kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.24s container-engine/docker : Write docker options systemd drop-in ---------- 2.23s Gathering Facts --------------------------------------------------------- 2.20s kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.15s container-engine/docker : Write docker dns systemd drop-in -------------- 2.12s download : Sync container ----------------------------------------------- 2.08s download : Download items ----------------------------------------------- 2.05s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Mon Aug 26 01:23:42 2019 From: ci at centos.org (ci at centos.org) Date: Mon, 26 Aug 2019 01:23:42 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #296 In-Reply-To: <1109188637.4459.1566695959475.JavaMail.jenkins@jenkins.ci.centos.org> References: <1109188637.4459.1566695959475.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1722175196.4508.1566782622806.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 60.77 KB...] FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=7 changed=4 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── dependency ├── cleanup ├── destroy ├── syntax ├── create ├── prepare ├── converge ├──
idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... [701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', 
u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'cleanup' Skipping, cleanup playbook not configured. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=instance) TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 --> Pruning extra files from scenario ephemeral directory ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── dependency ├── create └── prepare --> Scenario: 'default' --> Action: 'dependency' Skipping, missing the requirements file. --> Scenario: 'default' --> Action: 'create' --> Sanity checks: 'docker' PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Determine which docker image info module to use] ************************* ok: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image (new)] ********************************* ok: [localhost] => (item=molecule_local/centos/systemd) TASK [Build an Ansible compatible image (old)] ********************************* skipping: [localhost] => (item=molecule_local/centos/systemd) TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: 
[localhost] => (item=instance) TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=7 changed=3 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── dependency ├── cleanup ├── destroy ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', 
u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'cleanup' Skipping, cleanup playbook not configured. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=instance) TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 --> Pruning extra files from scenario ephemeral directory Build step 'Execute shell' marked build as failure Performing Post build task... 
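[Editor's note on the lint failures above: every [701]/[703] hit points at the role's unmodified galaxy-init boilerplate in roles/firewall_config/meta/main.yml. A minimal sketch of a meta/main.yml shape that would satisfy both rule families follows; every value is an illustrative placeholder, not taken from the gluster-ansible-infra repository.]

```yaml
# Sketch only: placeholder values replacing the ansible-galaxy init
# defaults that ansible-lint flags. Author/description/company/license
# below are examples, not the project's real metadata.
galaxy_info:
  author: example-maintainer                 # [703] replace 'your name'
  description: Configures firewalld for GlusterFS nodes  # [703] replace 'your description'
  company: Example Org                       # [703] replace, or drop the key
  license: GPLv3                             # [703] replace the template text
  min_ansible_version: 2.5
  platforms:                                 # [701] role info should contain platforms
    - name: EL
      versions:
        - 7
  galaxy_tags: []

dependencies: []
```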
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Tue Aug 27 00:15:03 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 27 Aug 2019 00:15:03 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #468 In-Reply-To: <742053103.4501.1566778596094.JavaMail.jenkins@jenkins.ci.centos.org> References: <742053103.4501.1566778596094.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <167787243.4669.1566864903441.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 39.24 KB...] Transaction test succeeded Running transaction Installing : python36-libs-3.6.8-1.el7.x86_64 1/52 Installing : python36-3.6.8-1.el7.x86_64 2/52 Installing : apr-1.4.8-3.el7_4.1.x86_64 3/52 Installing : mpfr-3.1.1-4.el7.x86_64 4/52 Installing : libmpc-1.0.1-3.el7.x86_64 5/52 Installing : apr-util-1.5.2-6.el7.x86_64 6/52 Installing : python36-six-1.11.0-3.el7.noarch 7/52 Installing : cpp-4.8.5-36.el7_6.2.x86_64 8/52 Installing : python36-idna-2.7-2.el7.noarch 9/52 Installing : python36-pysocks-1.6.8-6.el7.noarch 10/52 Installing : python36-urllib3-1.19.1-5.el7.noarch 11/52 Installing : python36-pyroute2-0.4.13-2.el7.noarch 12/52 Installing : python36-setuptools-39.2.0-3.el7.noarch 13/52 Installing : python36-chardet-2.3.0-6.el7.noarch 14/52 Installing : python36-requests-2.12.5-3.el7.noarch 15/52 Installing : python36-distro-1.2.0-3.el7.noarch 16/52 Installing : python36-markupsafe-0.23-3.el7.x86_64 17/52 Installing : python36-jinja2-2.8.1-2.el7.noarch 18/52 Installing : python36-rpm-4.11.3-4.el7.x86_64 19/52 Installing : elfutils-0.172-2.el7.x86_64 20/52 Installing : unzip-6.0-19.el7.x86_64 21/52 Installing : dwz-0.11-3.el7.x86_64 
22/52 Installing : bzip2-1.0.6-13.el7.x86_64 23/52 Installing : distribution-gpg-keys-1.32-1.el7.noarch 24/52 Installing : mock-core-configs-30.4-1.el7.noarch 25/52 Installing : usermode-1.111-5.el7.x86_64 26/52 Installing : pakchois-0.4-10.el7.x86_64 27/52 Installing : patch-2.7.1-10.el7_5.x86_64 28/52 Installing : libmodman-2.0.1-8.el7.x86_64 29/52 Installing : libproxy-0.4.11-11.el7.x86_64 30/52 Installing : gdb-7.6.1-114.el7.x86_64 31/52 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/52 Installing : perl-srpm-macros-1-8.el7.noarch 33/52 Installing : pigz-2.3.4-1.el7.x86_64 34/52 Installing : golang-src-1.11.5-1.el7.noarch 35/52 Installing : nettle-2.7.1-8.el7.x86_64 36/52 Installing : kernel-headers-3.10.0-957.27.2.el7.x86_64 37/52 Installing : glibc-headers-2.17-260.el7_6.6.x86_64 38/52 Installing : glibc-devel-2.17-260.el7_6.6.x86_64 39/52 Installing : gcc-4.8.5-36.el7_6.2.x86_64 40/52 Installing : zip-3.0-11.el7.x86_64 41/52 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 42/52 Installing : mercurial-2.6.2-8.el7_4.x86_64 43/52 Installing : trousers-0.3.14-2.el7.x86_64 44/52 Installing : gnutls-3.3.29-9.el7_6.x86_64 45/52 Installing : neon-0.30.0-3.el7.x86_64 46/52 Installing : subversion-libs-1.7.14-14.el7.x86_64 47/52 Installing : subversion-1.7.14-14.el7.x86_64 48/52 Installing : golang-1.11.5-1.el7.x86_64 49/52 Installing : golang-bin-1.11.5-1.el7.x86_64 50/52 Installing : rpm-build-4.11.3-35.el7.x86_64 51/52 Installing : mock-1.4.16-1.el7.noarch 52/52 Verifying : trousers-0.3.14-2.el7.x86_64 1/52 Verifying : python36-idna-2.7-2.el7.noarch 2/52 Verifying : rpm-build-4.11.3-35.el7.x86_64 3/52 Verifying : glibc-headers-2.17-260.el7_6.6.x86_64 4/52 Verifying : python36-pysocks-1.6.8-6.el7.noarch 5/52 Verifying : mercurial-2.6.2-8.el7_4.x86_64 6/52 Verifying : zip-3.0-11.el7.x86_64 7/52 Verifying : python36-3.6.8-1.el7.x86_64 8/52 Verifying : subversion-libs-1.7.14-14.el7.x86_64 9/52 Verifying : python36-urllib3-1.19.1-5.el7.noarch 10/52 
Verifying : kernel-headers-3.10.0-957.27.2.el7.x86_64 11/52 Verifying : nettle-2.7.1-8.el7.x86_64 12/52 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 13/52 Verifying : golang-src-1.11.5-1.el7.noarch 14/52 Verifying : python36-pyroute2-0.4.13-2.el7.noarch 15/52 Verifying : pigz-2.3.4-1.el7.x86_64 16/52 Verifying : perl-srpm-macros-1-8.el7.noarch 17/52 Verifying : golang-1.11.5-1.el7.x86_64 18/52 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 19/52 Verifying : golang-bin-1.11.5-1.el7.x86_64 20/52 Verifying : gdb-7.6.1-114.el7.x86_64 21/52 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 22/52 Verifying : gnutls-3.3.29-9.el7_6.x86_64 23/52 Verifying : mock-1.4.16-1.el7.noarch 24/52 Verifying : libmodman-2.0.1-8.el7.x86_64 25/52 Verifying : python36-setuptools-39.2.0-3.el7.noarch 26/52 Verifying : mpfr-3.1.1-4.el7.x86_64 27/52 Verifying : python36-six-1.11.0-3.el7.noarch 28/52 Verifying : apr-util-1.5.2-6.el7.x86_64 29/52 Verifying : python36-chardet-2.3.0-6.el7.noarch 30/52 Verifying : patch-2.7.1-10.el7_5.x86_64 31/52 Verifying : libmpc-1.0.1-3.el7.x86_64 32/52 Verifying : pakchois-0.4-10.el7.x86_64 33/52 Verifying : neon-0.30.0-3.el7.x86_64 34/52 Verifying : usermode-1.111-5.el7.x86_64 35/52 Verifying : apr-1.4.8-3.el7_4.1.x86_64 36/52 Verifying : libproxy-0.4.11-11.el7.x86_64 37/52 Verifying : mock-core-configs-30.4-1.el7.noarch 38/52 Verifying : distribution-gpg-keys-1.32-1.el7.noarch 39/52 Verifying : glibc-devel-2.17-260.el7_6.6.x86_64 40/52 Verifying : bzip2-1.0.6-13.el7.x86_64 41/52 Verifying : subversion-1.7.14-14.el7.x86_64 42/52 Verifying : python36-distro-1.2.0-3.el7.noarch 43/52 Verifying : dwz-0.11-3.el7.x86_64 44/52 Verifying : unzip-6.0-19.el7.x86_64 45/52 Verifying : python36-markupsafe-0.23-3.el7.x86_64 46/52 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 47/52 Verifying : python36-requests-2.12.5-3.el7.noarch 48/52 Verifying : python36-jinja2-2.8.1-2.el7.noarch 49/52 Verifying : python36-libs-3.6.8-1.el7.x86_64 50/52 Verifying : 
elfutils-0.172-2.el7.x86_64 51/52 Verifying : python36-rpm-4.11.3-4.el7.x86_64 52/52 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.16-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.32-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.6 glibc-headers.x86_64 0:2.17-260.el7_6.6 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.27.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.4-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python36.x86_64 0:3.6.8-1.el7 python36-chardet.noarch 0:2.3.0-6.el7 python36-distro.noarch 0:1.2.0-3.el7 python36-idna.noarch 0:2.7-2.el7 python36-jinja2.noarch 0:2.8.1-2.el7 python36-libs.x86_64 0:3.6.8-1.el7 python36-markupsafe.x86_64 0:0.23-3.el7 python36-pyroute2.noarch 0:0.4.13-2.el7 python36-pysocks.noarch 0:1.6.8-6.el7 python36-requests.noarch 0:2.12.5-3.el7 python36-rpm.x86_64 0:4.11.3-4.el7 python36-setuptools.noarch 0:39.2.0-3.el7 python36-six.noarch 0:1.11.0-3.el7 python36-urllib3.noarch 0:1.19.1-5.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 2622 0 --:--:-- --:--:-- --:--:-- 2641 100 8513k 100 8513k 0 0 10.2M 0 --:--:-- --:--:-- --:--:-- 10.2M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2899 0 --:--:-- --:--:-- --:--:-- 2902 0 38.3M 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 27 38.3M 27 10.6M 0 0 8595k 0 0:00:04 0:00:01 0:00:03 11.5M 56 38.3M 56 21.8M 0 0 9831k 0 0:00:03 0:00:02 0:00:01 11.3M 92 38.3M 92 35.3M 0 0 10.8M 0 0:00:03 0:00:03 --:--:-- 12.0M100 38.3M 100 38.3M 0 0 11.0M 0 0:00:03 0:00:03 --:--:-- 12.3M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 767 0 --:--:-- --:--:-- --:--:-- 768 0 0 0 620 0 0 2218 0 --:--:-- --:--:-- --:--:-- 2218 100 10.7M 100 10.7M 0 0 11.8M 0 --:--:-- --:--:-- --:--:-- 11.8M ~/nightlyrpmPc22Gi/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmPc22Gi/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmPc22Gi/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmPc22Gi ~ INFO: mock.py version 1.4.16 starting (python version = 3.6.8)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmPc22Gi/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.16 INFO: Mock Version: 1.4.16 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmPc22Gi/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 1 minutes 32 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 2721fff33bb943dabc42b85adb011ac8 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.10vcyk4p:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins2409140780876767333.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done feb2bb48 +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 230 | n39.dusty | 172.19.2.103 | dusty | 3951 | Deployed | feb2bb48 | None | None | 7 | x86_64 | 1 | 2380 | None | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Tue Aug 27 00:40:54 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 27 Aug 2019 00:40:54 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #272 In-Reply-To: <668172693.4503.1566780054299.JavaMail.jenkins@jenkins.ci.centos.org> References: <668172693.4503.1566780054299.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <761143753.4670.1566866454642.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 289.10 KB...] 
TASK [container-engine/docker : check number of search domains] **************** Tuesday 27 August 2019 01:40:10 +0100 (0:00:00.301) 0:03:01.111 ******** TASK [container-engine/docker : check length of search domains] **************** Tuesday 27 August 2019 01:40:10 +0100 (0:00:00.299) 0:03:01.410 ******** TASK [container-engine/docker : check for minimum kernel version] ************** Tuesday 27 August 2019 01:40:11 +0100 (0:00:00.305) 0:03:01.716 ******** TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Tuesday 27 August 2019 01:40:11 +0100 (0:00:00.302) 0:03:02.019 ******** TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] *** Tuesday 27 August 2019 01:40:12 +0100 (0:00:00.591) 0:03:02.611 ******** TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Tuesday 27 August 2019 01:40:13 +0100 (0:00:01.353) 0:03:03.965 ******** TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Tuesday 27 August 2019 01:40:13 +0100 (0:00:00.267) 0:03:04.232 ******** TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Tuesday 27 August 2019 01:40:13 +0100 (0:00:00.263) 0:03:04.495 ******** TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Tuesday 27 August 2019 01:40:14 +0100 (0:00:00.316) 0:03:04.812 ******** TASK [container-engine/docker : Configure docker repository on Fedora] ********* Tuesday 27 August 2019 01:40:14 +0100 (0:00:00.323) 0:03:05.135 ******** TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Tuesday 27 August 2019 01:40:14 +0100 (0:00:00.280) 0:03:05.415 ******** TASK [container-engine/docker : Copy yum.conf for editing] ********************* Tuesday 27 August 2019 01:40:15 +0100 (0:00:00.294) 0:03:05.710 ******** TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Tuesday 27 August 
2019 01:40:15 +0100 (0:00:00.346) 0:03:06.056 ******** TASK [container-engine/docker : ensure docker packages are installed] ********** Tuesday 27 August 2019 01:40:15 +0100 (0:00:00.320) 0:03:06.378 ******** TASK [container-engine/docker : Ensure docker packages are installed] ********** Tuesday 27 August 2019 01:40:16 +0100 (0:00:00.376) 0:03:06.754 ******** TASK [container-engine/docker : get available packages on Ubuntu] ************** Tuesday 27 August 2019 01:40:16 +0100 (0:00:00.411) 0:03:07.166 ******** TASK [container-engine/docker : show available packages on ubuntu] ************* Tuesday 27 August 2019 01:40:16 +0100 (0:00:00.319) 0:03:07.485 ******** TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Tuesday 27 August 2019 01:40:17 +0100 (0:00:00.294) 0:03:07.780 ******** TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Tuesday 27 August 2019 01:40:17 +0100 (0:00:00.293) 0:03:08.074 ******** ok: [kube3] ok: [kube2] ok: [kube1] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Tuesday 27 August 2019 01:40:19 +0100 (0:00:02.104) 0:03:10.178 ******** ok: [kube1] ok: [kube2] ok: [kube3] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Tuesday 27 August 2019 01:40:20 +0100 (0:00:01.084) 0:03:11.262 ******** TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Tuesday 27 August 2019 01:40:21 +0100 (0:00:00.298) 0:03:11.561 ******** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Tuesday 27 August 2019 01:40:22 +0100 (0:00:01.051) 0:03:12.613 ******** TASK [container-engine/docker : get systemd version] *************************** Tuesday 27 August 2019 01:40:22 +0100 (0:00:00.336) 0:03:12.949 ******** TASK [container-engine/docker : Write docker.service systemd file] ************* Tuesday 27 August 2019 01:40:22 +0100 (0:00:00.322) 0:03:13.271 ******** TASK [container-engine/docker : Write docker options systemd drop-in] ********** Tuesday 27 August 2019 01:40:23 +0100 (0:00:00.327) 0:03:13.598 ******** changed: [kube2] changed: [kube3] changed: [kube1] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Tuesday 27 August 2019 01:40:25 +0100 (0:00:02.211) 0:03:15.810 ******** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Tuesday 27 August 2019 01:40:27 +0100 (0:00:02.231) 0:03:18.041 ******** TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Tuesday 27 August 2019 01:40:27 +0100 (0:00:00.341) 0:03:18.383 ******** RUNNING HANDLER [container-engine/docker : restart docker] ********************* Tuesday 27 August 2019 01:40:28 +0100 (0:00:00.243) 0:03:18.626 ******** changed: [kube2] changed: [kube3] changed: [kube1] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Tuesday 27 August 2019 01:40:30 +0100 (0:00:02.078) 0:03:20.704 ******** changed: [kube2] changed: [kube3] changed: [kube1] RUNNING HANDLER 
[container-engine/docker : Docker | reload docker.socket] ****** Tuesday 27 August 2019 01:40:31 +0100 (0:00:01.153) 0:03:21.858 ******** RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Tuesday 27 August 2019 01:40:31 +0100 (0:00:00.300) 0:03:22.158 ******** changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Tuesday 27 August 2019 01:40:35 +0100 (0:00:04.149) 0:03:26.308 ******** Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube2] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Tuesday 27 August 2019 01:40:45 +0100 (0:00:10.205) 0:03:36.513 ******** changed: [kube2] changed: [kube3] changed: [kube1] TASK [container-engine/docker : ensure docker service is started and enabled] *** Tuesday 27 August 2019 01:40:47 +0100 (0:00:01.335) 0:03:37.849 ******** ok: [kube1] => (item=docker) ok: [kube3] => (item=docker) ok: [kube2] => (item=docker) TASK [download : include_tasks] ************************************************ Tuesday 27 August 2019 01:40:48 +0100 (0:00:01.327) 0:03:39.177 ******** included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Tuesday 27 August 2019 01:40:49 +0100 (0:00:00.536) 0:03:39.713 ******** ok: [kube1] ok: [kube2] ok: [kube3] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Tuesday 27 August 2019 01:40:50 +0100 (0:00:01.092) 0:03:40.805 ******** changed: [kube1] changed: [kube2] changed: [kube3] TASK [download : container_download | create local directory for saved/loaded container images] *** Tuesday 27 August 2019 01:40:51 +0100 (0:00:00.995) 0:03:41.801 ******** TASK [download : 
Download items] *********************************************** Tuesday 27 August 2019 01:40:51 +0100 (0:00:00.134) 0:03:41.935 ******** fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} PLAY RECAP ********************************************************************* kube1 : ok=108 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0 kube2 : ok=97 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0 kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0 Tuesday 27 August 2019 01:40:54 +0100 (0:00:02.764) 0:03:44.700 ******** =============================================================================== Install packages ------------------------------------------------------- 34.40s Wait for host to be available ------------------------------------------ 21.55s gather facts from all instances ---------------------------------------- 16.85s container-engine/docker : Docker | pause while Docker restarts --------- 10.21s Persist loaded modules -------------------------------------------------- 6.47s container-engine/docker : Docker | reload docker ------------------------ 4.15s kubernetes/preinstall : Create kubernetes directories ------------------- 4.12s bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.82s download 
: Download items ----------------------------------------------- 2.76s kubernetes/preinstall : Create cni directories -------------------------- 2.73s Load required kernel modules -------------------------------------------- 2.58s bootstrap-os : Gather nodes hostnames ----------------------------------- 2.55s Extend root VG ---------------------------------------------------------- 2.54s kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.51s Gathering Facts --------------------------------------------------------- 2.26s container-engine/docker : Write docker dns systemd drop-in -------------- 2.23s container-engine/docker : Write docker options systemd drop-in ---------- 2.21s container-engine/docker : ensure service is started if docker packages are already present --- 2.10s container-engine/docker : restart docker -------------------------------- 2.08s kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.05s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
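The cluster of identical failures above is Ansible (2.8+) rejecting `delegate_to` placed directly on a dynamic `include_tasks`; the keyword is only valid on regular tasks. A minimal sketch of the class of fix (this is illustrative, not the actual kubespray patch; the task name and the `download_delegate` variable are assumptions):

```yaml
# Sketch only - shows the shape of the fix, not kubespray's actual change.
#
# Rejected form ("'delegate_to' is not a valid attribute for a TaskInclude"):
#
# - include_tasks: download_container.yml
#   delegate_to: "{{ download_delegate }}"
#
# Since Ansible 2.7, `include_tasks` accepts `apply`, which forwards
# keywords to every task in the included file:
- name: container_download | include download tasks   # hypothetical name
  include_tasks:
    file: download_container.yml
    apply:
      delegate_to: "{{ download_delegate }}"   # assumed variable name
```

The alternative is to drop `delegate_to` from the include and set it on the individual tasks inside `download_container.yml`.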
Could not match :Build started : False
Logical operation result is FALSE
Skipping script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0

From ci at centos.org Tue Aug 27 01:18:29 2019
From: ci at centos.org (ci at centos.org)
Date: Tue, 27 Aug 2019 01:18:29 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #297
In-Reply-To: <1722175196.4508.1566782622806.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1722175196.4508.1566782622806.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <147040582.4674.1566868709930.JavaMail.jenkins@jenkins.ci.centos.org>

See ------------------------------------------
[...truncated 60.72 KB...]
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

PLAY RECAP *********************************************************************
localhost : ok=7 changed=4 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0

--> Scenario: 'default'
--> Action: 'prepare'

PLAY [Prepare] *****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

PLAY RECAP *********************************************************************
instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── dependency
    ├── cleanup
    ├── destroy
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: author
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: description
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)',
u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'cleanup' Skipping, cleanup playbook not configured. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=instance) TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Delete docker network(s)] ************************************************

PLAY RECAP *********************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0

--> Pruning extra files from scenario ephemeral directory
./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── dependency
    ├── create
    └── prepare

--> Scenario: 'default'
--> Action: 'dependency'
Skipping, missing the requirements file.
--> Scenario: 'default'
--> Action: 'create'
--> Sanity checks: 'docker'

PLAY [Create] ******************************************************************

TASK [Log into a Docker registry] **********************************************
skipping: [localhost] => (item=None)

TASK [Create Dockerfiles from image names] *************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Determine which docker image info module to use] *************************
ok: [localhost]

TASK [Discover local Docker images] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Build an Ansible compatible image (new)] *********************************
ok: [localhost] => (item=molecule_local/centos/systemd)

TASK [Build an Ansible compatible image (old)] *********************************
skipping: [localhost] => (item=molecule_local/centos/systemd)

TASK [Create docker network(s)] ************************************************

TASK [Determine the CMD directives] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Create molecule instance(s)] *********************************************
changed:
[localhost] => (item=instance)
TASK [Wait for instance(s) creation to complete] *******************************
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=7 changed=3 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
[...prepare, schema validation and test-matrix output identical to the first firewall_config run above; truncated...]
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[...Ansible Lint reports the same [701]/[703] findings for /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml as in the first run above; identical output truncated...]
An error occurred during the test sequence action: 'lint'. Cleaning up.
--> Scenario: 'default'
--> Action: 'cleanup'
Skipping, cleanup playbook not configured.
--> Scenario: 'default'
--> Action: 'destroy'

PLAY [Destroy] *****************************************************************

TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=instance)

TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Delete docker network(s)] ************************************************

PLAY RECAP *********************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0

--> Pruning extra files from scenario ephemeral directory
Build step 'Execute shell' marked build as failure
Performing Post build task...
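The [701]/[703] findings above are ansible-lint flagging the untouched `galaxy_info` scaffolding in the role's meta/main.yml: no `platforms` list, and the generator's placeholder author, description, company and license. A minimal sketch of a meta/main.yml that would satisfy those rules (all values here are illustrative placeholders, not the project's real metadata):

```yaml
# roles/firewall_config/meta/main.yml - illustrative values only
galaxy_info:
  author: gluster-ansible maintainers          # [703] replace default author
  description: Configure firewalld for GlusterFS  # [703] replace default description
  company: example company                      # [703] replace default company
  license: GPLv3                                # [703] replace default license
  min_ansible_version: 2.5
  platforms:                                    # [701] role info should contain platforms
    - name: EL
      versions:
        - 7
  galaxy_tags: []
dependencies: []
```

With `platforms` present and the placeholder strings replaced, the lint action passes and the molecule sequence can proceed past 'lint'.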
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Wed Aug 28 00:17:05 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 28 Aug 2019 00:17:05 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #469 In-Reply-To: <167787243.4669.1566864903441.JavaMail.jenkins@jenkins.ci.centos.org> References: <167787243.4669.1566864903441.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <782428068.35.1566951425455.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 39.01 KB...] Transaction test succeeded Running transaction Installing : python36-libs-3.6.8-1.el7.x86_64 1/52 Installing : python36-3.6.8-1.el7.x86_64 2/52 Installing : apr-1.4.8-3.el7_4.1.x86_64 3/52 Installing : mpfr-3.1.1-4.el7.x86_64 4/52 Installing : libmpc-1.0.1-3.el7.x86_64 5/52 Installing : apr-util-1.5.2-6.el7.x86_64 6/52 Installing : python36-six-1.11.0-3.el7.noarch 7/52 Installing : cpp-4.8.5-36.el7_6.2.x86_64 8/52 Installing : python36-idna-2.7-2.el7.noarch 9/52 Installing : python36-pysocks-1.6.8-6.el7.noarch 10/52 Installing : python36-urllib3-1.19.1-5.el7.noarch 11/52 Installing : python36-pyroute2-0.4.13-2.el7.noarch 12/52 Installing : python36-setuptools-39.2.0-3.el7.noarch 13/52 Installing : python36-chardet-2.3.0-6.el7.noarch 14/52 Installing : python36-requests-2.12.5-3.el7.noarch 15/52 Installing : python36-distro-1.2.0-3.el7.noarch 16/52 Installing : python36-markupsafe-0.23-3.el7.x86_64 17/52 Installing : python36-jinja2-2.8.1-2.el7.noarch 18/52 Installing : python36-rpm-4.11.3-4.el7.x86_64 19/52 Installing : elfutils-0.172-2.el7.x86_64 20/52 Installing : unzip-6.0-19.el7.x86_64 21/52 Installing : dwz-0.11-3.el7.x86_64 
22/52 Installing : bzip2-1.0.6-13.el7.x86_64 23/52 Installing : distribution-gpg-keys-1.32-1.el7.noarch 24/52 Installing : mock-core-configs-30.4-1.el7.noarch 25/52 Installing : usermode-1.111-5.el7.x86_64 26/52 Installing : pakchois-0.4-10.el7.x86_64 27/52 Installing : patch-2.7.1-10.el7_5.x86_64 28/52 Installing : libmodman-2.0.1-8.el7.x86_64 29/52 Installing : libproxy-0.4.11-11.el7.x86_64 30/52 Installing : gdb-7.6.1-114.el7.x86_64 31/52 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/52 Installing : perl-srpm-macros-1-8.el7.noarch 33/52 Installing : pigz-2.3.4-1.el7.x86_64 34/52 Installing : golang-src-1.11.5-1.el7.noarch 35/52 Installing : nettle-2.7.1-8.el7.x86_64 36/52 Installing : kernel-headers-3.10.0-957.27.2.el7.x86_64 37/52 Installing : glibc-headers-2.17-260.el7_6.6.x86_64 38/52 Installing : glibc-devel-2.17-260.el7_6.6.x86_64 39/52 Installing : gcc-4.8.5-36.el7_6.2.x86_64 40/52 Installing : zip-3.0-11.el7.x86_64 41/52 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 42/52 Installing : mercurial-2.6.2-8.el7_4.x86_64 43/52 Installing : trousers-0.3.14-2.el7.x86_64 44/52 Installing : gnutls-3.3.29-9.el7_6.x86_64 45/52 Installing : neon-0.30.0-3.el7.x86_64 46/52 Installing : subversion-libs-1.7.14-14.el7.x86_64 47/52 Installing : subversion-1.7.14-14.el7.x86_64 48/52 Installing : golang-1.11.5-1.el7.x86_64 49/52 Installing : golang-bin-1.11.5-1.el7.x86_64 50/52 Installing : rpm-build-4.11.3-35.el7.x86_64 51/52 Installing : mock-1.4.16-1.el7.noarch 52/52 Verifying : trousers-0.3.14-2.el7.x86_64 1/52 Verifying : python36-idna-2.7-2.el7.noarch 2/52 Verifying : rpm-build-4.11.3-35.el7.x86_64 3/52 Verifying : glibc-headers-2.17-260.el7_6.6.x86_64 4/52 Verifying : python36-pysocks-1.6.8-6.el7.noarch 5/52 Verifying : mercurial-2.6.2-8.el7_4.x86_64 6/52 Verifying : zip-3.0-11.el7.x86_64 7/52 Verifying : python36-3.6.8-1.el7.x86_64 8/52 Verifying : subversion-libs-1.7.14-14.el7.x86_64 9/52 Verifying : python36-urllib3-1.19.1-5.el7.noarch 10/52 
Verifying : kernel-headers-3.10.0-957.27.2.el7.x86_64 11/52 Verifying : nettle-2.7.1-8.el7.x86_64 12/52 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 13/52 Verifying : golang-src-1.11.5-1.el7.noarch 14/52 Verifying : python36-pyroute2-0.4.13-2.el7.noarch 15/52 Verifying : pigz-2.3.4-1.el7.x86_64 16/52 Verifying : perl-srpm-macros-1-8.el7.noarch 17/52 Verifying : golang-1.11.5-1.el7.x86_64 18/52 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 19/52 Verifying : golang-bin-1.11.5-1.el7.x86_64 20/52 Verifying : gdb-7.6.1-114.el7.x86_64 21/52 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 22/52 Verifying : gnutls-3.3.29-9.el7_6.x86_64 23/52 Verifying : mock-1.4.16-1.el7.noarch 24/52 Verifying : libmodman-2.0.1-8.el7.x86_64 25/52 Verifying : python36-setuptools-39.2.0-3.el7.noarch 26/52 Verifying : mpfr-3.1.1-4.el7.x86_64 27/52 Verifying : python36-six-1.11.0-3.el7.noarch 28/52 Verifying : apr-util-1.5.2-6.el7.x86_64 29/52 Verifying : python36-chardet-2.3.0-6.el7.noarch 30/52 Verifying : patch-2.7.1-10.el7_5.x86_64 31/52 Verifying : libmpc-1.0.1-3.el7.x86_64 32/52 Verifying : pakchois-0.4-10.el7.x86_64 33/52 Verifying : neon-0.30.0-3.el7.x86_64 34/52 Verifying : usermode-1.111-5.el7.x86_64 35/52 Verifying : apr-1.4.8-3.el7_4.1.x86_64 36/52 Verifying : libproxy-0.4.11-11.el7.x86_64 37/52 Verifying : mock-core-configs-30.4-1.el7.noarch 38/52 Verifying : distribution-gpg-keys-1.32-1.el7.noarch 39/52 Verifying : glibc-devel-2.17-260.el7_6.6.x86_64 40/52 Verifying : bzip2-1.0.6-13.el7.x86_64 41/52 Verifying : subversion-1.7.14-14.el7.x86_64 42/52 Verifying : python36-distro-1.2.0-3.el7.noarch 43/52 Verifying : dwz-0.11-3.el7.x86_64 44/52 Verifying : unzip-6.0-19.el7.x86_64 45/52 Verifying : python36-markupsafe-0.23-3.el7.x86_64 46/52 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 47/52 Verifying : python36-requests-2.12.5-3.el7.noarch 48/52 Verifying : python36-jinja2-2.8.1-2.el7.noarch 49/52 Verifying : python36-libs-3.6.8-1.el7.x86_64 50/52 Verifying : 
elfutils-0.172-2.el7.x86_64 51/52 Verifying : python36-rpm-4.11.3-4.el7.x86_64 52/52 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.16-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.32-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.6 glibc-headers.x86_64 0:2.17-260.el7_6.6 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.27.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.4-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python36.x86_64 0:3.6.8-1.el7 python36-chardet.noarch 0:2.3.0-6.el7 python36-distro.noarch 0:1.2.0-3.el7 python36-idna.noarch 0:2.7-2.el7 python36-jinja2.noarch 0:2.8.1-2.el7 python36-libs.x86_64 0:3.6.8-1.el7 python36-markupsafe.x86_64 0:0.23-3.el7 python36-pyroute2.noarch 0:0.4.13-2.el7 python36-pysocks.noarch 0:1.6.8-6.el7 python36-requests.noarch 0:2.12.5-3.el7 python36-rpm.x86_64 0:4.11.3-4.el7 python36-setuptools.noarch 0:39.2.0-3.el7 python36-six.noarch 0:1.11.0-3.el7 python36-urllib3.noarch 0:1.19.1-5.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1827 0 --:--:-- --:--:-- --:--:-- 1838 67 8513k 67 5780k 0 0 9997k 0 --:--:-- --:--:-- --:--:-- 9997k100 8513k 100 8513k 0 0 13.8M 0 --:--:-- --:--:-- --:--:-- 116M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2144 0 --:--:-- --:--:-- --:--:-- 2147 1 38.3M 1 781k 0 0 810k 0 0:00:48 --:--:-- 0:00:48 810k 25 38.3M 25 9.8M 0 0 5138k 0 0:00:07 0:00:01 0:00:06 9328k 59 38.3M 59 22.8M 0 0 7917k 0 0:00:04 0:00:02 0:00:02 11.0M100 38.3M 100 38.3M 0 0 10.2M 0 0:00:03 0:00:03 --:--:-- 13.5M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 576 0 --:--:-- --:--:-- --:--:-- 579 0 0 0 620 0 0 1842 0 --:--:-- --:--:-- --:--:-- 1842 100 10.7M 100 10.7M 0 0 14.3M 0 --:--:-- --:--:-- --:--:-- 14.3M ~/nightlyrpmi09KJN/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmi09KJN/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmi09KJN/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmi09KJN ~ INFO: mock.py version 1.4.16 starting (python version = 3.6.8)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmi09KJN/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.16 INFO: Mock Version: 1.4.16 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmi09KJN/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 28 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 47e7b02bba01417d851e320520cab638 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.lryn1ryi:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_GB.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
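The mock run above aborts inside the rpmbuild step of the epel-7-x86_64 chroot. To reproduce the same build outside Jenkins, the usual invocation is mock's `--rebuild` against the generated SRPM — a sketch only, assuming mock is installed locally; the SRPM path is the one printed in this log and would need a local copy:

```shell
# Sketch: rebuild the same SRPM in the same chroot config mock used in this job.
# Path is taken verbatim from the log above; adjust to wherever the SRPM lives locally.
srpm=/root/nightlyrpmi09KJN/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
cmd="mock -r epel-7-x86_64 --rebuild $srpm"
echo "$cmd"   # run the printed command manually on a host with mock installed
```

Running with `--no-cleanup-after` additionally preserves the failed chroot for inspection of build.log.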
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins4970963365093786420.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done 289c065c +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 289 | n34.gusty | 172.19.2.162 | gusty | 3956 | Deployed | 289c065c | None | None | 7 | x86_64 | 1 | 2330 | None | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Wed Aug 28 00:41:02 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 28 Aug 2019 00:41:02 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #273 In-Reply-To: <761143753.4670.1566866454642.JavaMail.jenkins@jenkins.ci.centos.org> References: <761143753.4670.1566866454642.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1981608247.37.1566952862575.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 289.00 KB...] 
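The post-build task in the previous message runs cico-node-done-from-ansible.sh to release leased CI nodes: it reads SSIDs from a workspace file and calls `cico node done` for each. A standalone sketch of that loop — the `cico` client (the CentOS CI/Duffy CLI) is stubbed here so the loop runs anywhere, and the SSIDs are illustrative, not real leases:

```shell
# Standalone sketch of the cico-node-done-from-ansible.sh release loop.
SSID_FILE=${SSID_FILE:-./cico-ssid}
printf '289c065c\nabcdef01\n' > "$SSID_FILE"   # illustrative SSIDs, not from a real lease

cico() {
  # Stub standing in for the real Duffy client; the real command
  # `cico -q node done <ssid>` returns the leased node to the pool.
  shift 3            # drop "-q node done", leaving the SSID
  echo "$1"
}

released=""
for ssid in $(cat "$SSID_FILE"); do
  released="$released $(cico -q node done "$ssid")"
done
released=${released# }   # trim the leading space
rm -f "$SSID_FILE"
echo "released: $released"
```

With the real client in place, each iteration frees one node, which is why a missing or empty SSID file makes the loop a silent no-op, as in the "Skipping script" case above.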
TASK [container-engine/docker : check number of search domains] **************** Wednesday 28 August 2019 01:40:19 +0100 (0:00:00.306) 0:03:04.059 ****** TASK [container-engine/docker : check length of search domains] **************** Wednesday 28 August 2019 01:40:19 +0100 (0:00:00.304) 0:03:04.364 ****** TASK [container-engine/docker : check for minimum kernel version] ************** Wednesday 28 August 2019 01:40:19 +0100 (0:00:00.303) 0:03:04.668 ****** TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Wednesday 28 August 2019 01:40:20 +0100 (0:00:00.294) 0:03:04.962 ****** TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] *** Wednesday 28 August 2019 01:40:20 +0100 (0:00:00.668) 0:03:05.631 ****** TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Wednesday 28 August 2019 01:40:22 +0100 (0:00:01.335) 0:03:06.966 ****** TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Wednesday 28 August 2019 01:40:22 +0100 (0:00:00.259) 0:03:07.226 ****** TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Wednesday 28 August 2019 01:40:22 +0100 (0:00:00.268) 0:03:07.494 ****** TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Wednesday 28 August 2019 01:40:22 +0100 (0:00:00.313) 0:03:07.808 ****** TASK [container-engine/docker : Configure docker repository on Fedora] ********* Wednesday 28 August 2019 01:40:23 +0100 (0:00:00.315) 0:03:08.124 ****** TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Wednesday 28 August 2019 01:40:23 +0100 (0:00:00.288) 0:03:08.413 ****** TASK [container-engine/docker : Copy yum.conf for editing] ********************* Wednesday 28 August 2019 01:40:23 +0100 (0:00:00.298) 0:03:08.711 ****** TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Wednesday 28 August 
2019 01:40:24 +0100 (0:00:00.297) 0:03:09.009 ****** TASK [container-engine/docker : ensure docker packages are installed] ********** Wednesday 28 August 2019 01:40:24 +0100 (0:00:00.289) 0:03:09.298 ****** TASK [container-engine/docker : Ensure docker packages are installed] ********** Wednesday 28 August 2019 01:40:24 +0100 (0:00:00.373) 0:03:09.672 ****** TASK [container-engine/docker : get available packages on Ubuntu] ************** Wednesday 28 August 2019 01:40:25 +0100 (0:00:00.333) 0:03:10.005 ****** TASK [container-engine/docker : show available packages on ubuntu] ************* Wednesday 28 August 2019 01:40:25 +0100 (0:00:00.280) 0:03:10.286 ****** TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Wednesday 28 August 2019 01:40:25 +0100 (0:00:00.281) 0:03:10.568 ****** TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Wednesday 28 August 2019 01:40:25 +0100 (0:00:00.293) 0:03:10.861 ****** ok: [kube2] ok: [kube3] ok: [kube1] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Wednesday 28 August 2019 01:40:27 +0100 (0:00:01.983) 0:03:12.845 ****** ok: [kube1] ok: [kube2] ok: [kube3] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Wednesday 28 August 2019 01:40:29 +0100 (0:00:01.094) 0:03:13.939 ****** TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Wednesday 28 August 2019 01:40:29 +0100 (0:00:00.328) 0:03:14.268 ****** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Wednesday 28 August 2019 01:40:30 +0100 (0:00:01.000) 0:03:15.269 ****** TASK [container-engine/docker : get systemd version] *************************** Wednesday 28 August 2019 01:40:30 +0100 (0:00:00.350) 0:03:15.619 ****** TASK [container-engine/docker : Write docker.service systemd file] ************* Wednesday 28 August 2019 01:40:31 +0100 (0:00:00.313) 0:03:15.933 ****** TASK [container-engine/docker : Write docker options systemd drop-in] ********** Wednesday 28 August 2019 01:40:31 +0100 (0:00:00.313) 0:03:16.247 ****** changed: [kube1] changed: [kube3] changed: [kube2] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Wednesday 28 August 2019 01:40:33 +0100 (0:00:02.293) 0:03:18.540 ****** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Wednesday 28 August 2019 01:40:35 +0100 (0:00:02.100) 0:03:20.641 ****** TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Wednesday 28 August 2019 01:40:36 +0100 (0:00:00.327) 0:03:20.968 ****** RUNNING HANDLER [container-engine/docker : restart docker] ********************* Wednesday 28 August 2019 01:40:36 +0100 (0:00:00.242) 0:03:21.211 ****** changed: [kube1] changed: [kube3] changed: [kube2] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Wednesday 28 August 2019 01:40:38 +0100 (0:00:01.867) 0:03:23.078 ****** changed: [kube1] changed: [kube3] changed: [kube2] RUNNING HANDLER 
[container-engine/docker : Docker | reload docker.socket] ****** Wednesday 28 August 2019 01:40:39 +0100 (0:00:01.200) 0:03:24.278 ****** RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Wednesday 28 August 2019 01:40:39 +0100 (0:00:00.348) 0:03:24.627 ****** changed: [kube1] changed: [kube3] changed: [kube2] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Wednesday 28 August 2019 01:40:43 +0100 (0:00:04.206) 0:03:28.833 ****** Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube1] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Wednesday 28 August 2019 01:40:54 +0100 (0:00:10.273) 0:03:39.107 ****** changed: [kube1] changed: [kube3] changed: [kube2] TASK [container-engine/docker : ensure docker service is started and enabled] *** Wednesday 28 August 2019 01:40:55 +0100 (0:00:01.277) 0:03:40.385 ****** ok: [kube1] => (item=docker) ok: [kube2] => (item=docker) ok: [kube3] => (item=docker) TASK [download : include_tasks] ************************************************ Wednesday 28 August 2019 01:40:56 +0100 (0:00:01.216) 0:03:41.601 ****** included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Wednesday 28 August 2019 01:40:57 +0100 (0:00:00.534) 0:03:42.136 ****** ok: [kube1] ok: [kube2] ok: [kube3] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Wednesday 28 August 2019 01:40:58 +0100 (0:00:01.031) 0:03:43.168 ****** changed: [kube1] changed: [kube2] changed: [kube3] TASK [download : container_download | create local directory for saved/loaded container images] *** Wednesday 28 August 2019 01:40:59 +0100 (0:00:00.967) 0:03:44.136 ****** TASK [download : 
Download items] *********************************************** Wednesday 28 August 2019 01:40:59 +0100 (0:00:00.108) 0:03:44.245 ****** fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED!
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} PLAY RECAP ********************************************************************* kube1 : ok=109 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0 kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0 kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0 Wednesday 28 August 2019 01:41:02 +0100 (0:00:02.744) 0:03:46.989 ****** =============================================================================== Install packages ------------------------------------------------------- 34.53s Wait for host to be available ------------------------------------------ 23.94s gather facts from all instances ---------------------------------------- 17.89s container-engine/docker : Docker | pause while Docker restarts --------- 10.27s Persist loaded modules -------------------------------------------------- 6.24s container-engine/docker : Docker | reload docker ------------------------ 4.21s kubernetes/preinstall : Create kubernetes directories ------------------- 3.76s download : Download items ----------------------------------------------- 2.74s Load required kernel modules 
-------------------------------------------- 2.64s kubernetes/preinstall : Create cni directories -------------------------- 2.53s bootstrap-os : Gather nodes hostnames ----------------------------------- 2.53s bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.51s Extend root VG ---------------------------------------------------------- 2.44s container-engine/docker : Write docker options systemd drop-in ---------- 2.29s download : Sync container ----------------------------------------------- 2.21s kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.18s kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.15s download : Download items ----------------------------------------------- 2.14s container-engine/docker : Write docker dns systemd drop-in -------------- 2.10s Gathering Facts --------------------------------------------------------- 2.03s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Wed Aug 28 00:50:26 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 28 Aug 2019 00:50:26 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10969 - Failure! 
(release-6 on CentOS-7/x86_64) Message-ID: <1399633341.40.1566953426942.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10969 - Failure: Check console output at https://ci.centos.org/job/gluster_build-rpms/10969/ to view the results. From ci at centos.org Wed Aug 28 00:50:26 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 28 Aug 2019 00:50:26 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10970 - Still Failing! (release-6 on CentOS-6/x86_64) Message-ID: <1450968042.41.1566953427069.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10970 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10970/ to view the results. From ci at centos.org Wed Aug 28 00:51:03 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 28 Aug 2019 00:51:03 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #298 In-Reply-To: <147040582.4674.1566868709930.JavaMail.jenkins@jenkins.ci.centos.org> References: <147040582.4674.1566868709930.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <931875490.42.1566953464003.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ Started by timer Running as SYSTEM [EnvInject] - Loading node environment variables. Building remotely on gluster-ci-slave01 (gluster) in workspace No credentials specified Wiping out workspace first. 
Cloning the remote Git repository Cloning repository https://github.com/gluster/centosci.git > git init # timeout=10 Fetching upstream changes from https://github.com/gluster/centosci.git > git --version # timeout=10 > git fetch --tags --progress https://github.com/gluster/centosci.git +refs/heads/*:refs/remotes/origin/* > git config remote.origin.url https://github.com/gluster/centosci.git # timeout=10 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10 > git config remote.origin.url https://github.com/gluster/centosci.git # timeout=10 Fetching upstream changes from https://github.com/gluster/centosci.git > git fetch --tags --progress https://github.com/gluster/centosci.git +refs/heads/*:refs/remotes/origin/* > git rev-parse refs/remotes/origin/master^{commit} # timeout=10 > git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10 Checking out Revision 1fe330d34a62b56438f6eb86538286962b1abd90 (refs/remotes/origin/master) > git config core.sparsecheckout # timeout=10 > git checkout -f 1fe330d34a62b56438f6eb86538286962b1abd90 Commit message: "Merge pull request #73 from kshithijiyer/Adding_task_to_install_crefi" > git rev-list --no-walk 1fe330d34a62b56438f6eb86538286962b1abd90 # timeout=10 [gluster_ansible-infra] $ /bin/sh -xe /tmp/jenkins3156732329477840058.sh + set +x string indices must be integers Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Thu Aug 29 00:17:18 2019 From: ci at centos.org (ci at centos.org) Date: Thu, 29 Aug 2019 00:17:18 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #470 In-Reply-To: <782428068.35.1566951425455.JavaMail.jenkins@jenkins.ci.centos.org> References: <782428068.35.1566951425455.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1703700681.210.1567037838626.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 39.01 KB...] Transaction test succeeded Running transaction Installing : python36-libs-3.6.8-1.el7.x86_64 1/52 Installing : python36-3.6.8-1.el7.x86_64 2/52 Installing : apr-1.4.8-3.el7_4.1.x86_64 3/52 Installing : mpfr-3.1.1-4.el7.x86_64 4/52 Installing : libmpc-1.0.1-3.el7.x86_64 5/52 Installing : apr-util-1.5.2-6.el7.x86_64 6/52 Installing : python36-six-1.11.0-3.el7.noarch 7/52 Installing : cpp-4.8.5-36.el7_6.2.x86_64 8/52 Installing : python36-idna-2.7-2.el7.noarch 9/52 Installing : python36-pysocks-1.6.8-6.el7.noarch 10/52 Installing : python36-urllib3-1.19.1-5.el7.noarch 11/52 Installing : python36-pyroute2-0.4.13-2.el7.noarch 12/52 Installing : python36-setuptools-39.2.0-3.el7.noarch 13/52 Installing : python36-chardet-2.3.0-6.el7.noarch 14/52 Installing : python36-requests-2.12.5-3.el7.noarch 15/52 Installing : python36-distro-1.2.0-3.el7.noarch 16/52 Installing : python36-markupsafe-0.23-3.el7.x86_64 17/52 Installing : python36-jinja2-2.8.1-2.el7.noarch 18/52 Installing : python36-rpm-4.11.3-4.el7.x86_64 19/52 Installing : elfutils-0.172-2.el7.x86_64 20/52 Installing : unzip-6.0-19.el7.x86_64 21/52 Installing : dwz-0.11-3.el7.x86_64 22/52 
[...remaining package install/verify output truncated...]
Verifying : python36-rpm-4.11.3-4.el7.x86_64 52/52 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.16-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.32-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.6 glibc-headers.x86_64 0:2.17-260.el7_6.6 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.27.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.4-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python36.x86_64 0:3.6.8-1.el7 python36-chardet.noarch 0:2.3.0-6.el7 python36-distro.noarch 0:1.2.0-3.el7 python36-idna.noarch 0:2.7-2.el7 python36-jinja2.noarch 0:2.8.1-2.el7 python36-libs.x86_64 0:3.6.8-1.el7 python36-markupsafe.x86_64 0:0.23-3.el7 python36-pyroute2.noarch 0:0.4.13-2.el7 python36-pysocks.noarch 0:1.6.8-6.el7 python36-requests.noarch 0:2.12.5-3.el7 python36-rpm.x86_64 0:4.11.3-4.el7 python36-setuptools.noarch 0:39.2.0-3.el7 python36-six.noarch 0:1.11.0-3.el7 python36-urllib3.noarch 0:1.19.1-5.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0 [...curl progress output truncated...] Installing gometalinter. Version: 2.0.5 [...curl progress output truncated...] Installing etcd. Version: v3.3.9 [...curl progress output truncated...] ~/nightlyrpmvnO5t1/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmvnO5t1/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmvnO5t1/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmvnO5t1 ~ INFO: mock.py version 1.4.16 starting (python version = 3.6.8)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmvnO5t1/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.16 INFO: Mock Version: 1.4.16 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmvnO5t1/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 24 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 3ee40190d0ae4621803036c52f1c5a3a -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.bjq3zw13:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_GB.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins5775475964549461539.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done d8c4be07 +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 190 | n63.crusty | 172.19.2.63 | crusty | 3961 | Deployed | d8c4be07 | None | None | 7 | x86_64 | 1 | 2620 | None | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Thu Aug 29 00:40:54 2019 From: ci at centos.org (ci at centos.org) Date: Thu, 29 Aug 2019 00:40:54 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #274 In-Reply-To: <1981608247.37.1566952862575.JavaMail.jenkins@jenkins.ci.centos.org> References: <1981608247.37.1566952862575.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <878598344.212.1567039254147.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 289.01 KB...] 
TASK [container-engine/docker : check number of search domains] **************** Thursday 29 August 2019 01:40:11 +0100 (0:00:00.296) 0:03:00.937 ******* TASK [container-engine/docker : check length of search domains] **************** Thursday 29 August 2019 01:40:11 +0100 (0:00:00.299) 0:03:01.236 ******* TASK [container-engine/docker : check for minimum kernel version] ************** Thursday 29 August 2019 01:40:11 +0100 (0:00:00.305) 0:03:01.542 ******* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Thursday 29 August 2019 01:40:11 +0100 (0:00:00.291) 0:03:01.833 ******* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] *** Thursday 29 August 2019 01:40:12 +0100 (0:00:00.633) 0:03:02.467 ******* TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Thursday 29 August 2019 01:40:13 +0100 (0:00:01.318) 0:03:03.785 ******* TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Thursday 29 August 2019 01:40:14 +0100 (0:00:00.272) 0:03:04.057 ******* TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Thursday 29 August 2019 01:40:14 +0100 (0:00:00.255) 0:03:04.313 ******* TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Thursday 29 August 2019 01:40:14 +0100 (0:00:00.313) 0:03:04.627 ******* TASK [container-engine/docker : Configure docker repository on Fedora] ********* Thursday 29 August 2019 01:40:15 +0100 (0:00:00.306) 0:03:04.933 ******* TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Thursday 29 August 2019 01:40:15 +0100 (0:00:00.279) 0:03:05.212 ******* TASK [container-engine/docker : Copy yum.conf for editing] ********************* Thursday 29 August 2019 01:40:15 +0100 (0:00:00.283) 0:03:05.496 ******* TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Thursday 29 August 
2019 01:40:15 +0100 (0:00:00.279) 0:03:05.775 ******* TASK [container-engine/docker : ensure docker packages are installed] ********** Thursday 29 August 2019 01:40:16 +0100 (0:00:00.280) 0:03:06.056 ******* TASK [container-engine/docker : Ensure docker packages are installed] ********** Thursday 29 August 2019 01:40:16 +0100 (0:00:00.423) 0:03:06.479 ******* TASK [container-engine/docker : get available packages on Ubuntu] ************** Thursday 29 August 2019 01:40:17 +0100 (0:00:00.371) 0:03:06.851 ******* TASK [container-engine/docker : show available packages on ubuntu] ************* Thursday 29 August 2019 01:40:17 +0100 (0:00:00.285) 0:03:07.136 ******* TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Thursday 29 August 2019 01:40:17 +0100 (0:00:00.275) 0:03:07.412 ******* TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Thursday 29 August 2019 01:40:17 +0100 (0:00:00.299) 0:03:07.711 ******* ok: [kube1] ok: [kube3] ok: [kube2] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Thursday 29 August 2019 01:40:19 +0100 (0:00:01.921) 0:03:09.632 ******* ok: [kube1] ok: [kube2] ok: [kube3] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Thursday 29 August 2019 01:40:20 +0100 (0:00:01.038) 0:03:10.670 ******* TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Thursday 29 August 2019 01:40:21 +0100 (0:00:00.294) 0:03:10.965 ******* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Thursday 29 August 2019 01:40:22 +0100 (0:00:01.287) 0:03:12.252 ******* TASK [container-engine/docker : get systemd version] *************************** Thursday 29 August 2019 01:40:22 +0100 (0:00:00.391) 0:03:12.644 ******* TASK [container-engine/docker : Write docker.service systemd file] ************* Thursday 29 August 2019 01:40:23 +0100 (0:00:00.307) 0:03:12.952 ******* TASK [container-engine/docker : Write docker options systemd drop-in] ********** Thursday 29 August 2019 01:40:23 +0100 (0:00:00.308) 0:03:13.261 ******* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Thursday 29 August 2019 01:40:25 +0100 (0:00:02.144) 0:03:15.405 ******* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Thursday 29 August 2019 01:40:27 +0100 (0:00:02.175) 0:03:17.580 ******* TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Thursday 29 August 2019 01:40:28 +0100 (0:00:00.336) 0:03:17.917 ******* RUNNING HANDLER [container-engine/docker : restart docker] ********************* Thursday 29 August 2019 01:40:28 +0100 (0:00:00.237) 0:03:18.155 ******* changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Thursday 29 August 2019 01:40:30 +0100 (0:00:01.906) 0:03:20.061 ******* changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER 
[container-engine/docker : Docker | reload docker.socket] ****** Thursday 29 August 2019 01:40:31 +0100 (0:00:01.057) 0:03:21.119 ******* RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Thursday 29 August 2019 01:40:31 +0100 (0:00:00.283) 0:03:21.403 ******* changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Thursday 29 August 2019 01:40:35 +0100 (0:00:04.079) 0:03:25.483 ******* Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube1] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Thursday 29 August 2019 01:40:45 +0100 (0:00:10.206) 0:03:35.689 ******* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : ensure docker service is started and enabled] *** Thursday 29 August 2019 01:40:47 +0100 (0:00:01.216) 0:03:36.905 ******* ok: [kube1] => (item=docker) ok: [kube3] => (item=docker) ok: [kube2] => (item=docker) TASK [download : include_tasks] ************************************************ Thursday 29 August 2019 01:40:48 +0100 (0:00:01.199) 0:03:38.105 ******* included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Thursday 29 August 2019 01:40:48 +0100 (0:00:00.514) 0:03:38.620 ******* ok: [kube2] ok: [kube1] ok: [kube3] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Thursday 29 August 2019 01:40:49 +0100 (0:00:01.099) 0:03:39.719 ******* changed: [kube1] changed: [kube3] changed: [kube2] TASK [download : container_download | create local directory for saved/loaded container images] *** Thursday 29 August 2019 01:40:50 +0100 (0:00:01.037) 0:03:40.757 ******* TASK [download : 
Download items] *********************************************** Thursday 29 August 2019 01:40:51 +0100 (0:00:00.127) 0:03:40.885 ******* fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude [...same error output truncated...]"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude [...same error output truncated...]"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 [...the identical 'delegate_to' TaskInclude failure repeats for kube1, kube2 and kube3 on each subsequent download task; output truncated...]
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} PLAY RECAP ********************************************************************* kube1 : ok=109 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0 kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0 kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0 Thursday 29 August 2019 01:40:53 +0100 (0:00:02.749) 0:03:43.634 ******* =============================================================================== Install packages ------------------------------------------------------- 35.67s Wait for host to be available ------------------------------------------ 21.28s gather facts from all instances ---------------------------------------- 16.78s container-engine/docker : Docker | pause while Docker restarts --------- 10.21s Persist loaded modules -------------------------------------------------- 6.25s kubernetes/preinstall : Create kubernetes directories ------------------- 4.18s container-engine/docker : Docker | reload docker ------------------------ 4.08s download : Download items ----------------------------------------------- 2.75s bootstrap-os : Assign 
inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.72s Load required kernel modules -------------------------------------------- 2.64s kubernetes/preinstall : Create cni directories -------------------------- 2.53s bootstrap-os : Gather nodes hostnames ----------------------------------- 2.49s Extend root VG ---------------------------------------------------------- 2.39s Gathering Facts --------------------------------------------------------- 2.26s download : Sync container ----------------------------------------------- 2.21s kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.18s container-engine/docker : Write docker dns systemd drop-in -------------- 2.18s kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.17s container-engine/docker : Write docker options systemd drop-in ---------- 2.14s download : Download items ----------------------------------------------- 2.01s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Thu Aug 29 00:50:28 2019 From: ci at centos.org (ci at centos.org) Date: Thu, 29 Aug 2019 00:50:28 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10975 - Failure! 
(release-5 on CentOS-6/x86_64) Message-ID: <1582007620.216.1567039829306.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10975 - Failure: Check console output at https://ci.centos.org/job/gluster_build-rpms/10975/ to view the results. From ci at centos.org Thu Aug 29 00:50:28 2019 From: ci at centos.org (ci at centos.org) Date: Thu, 29 Aug 2019 00:50:28 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10976 - Still Failing! (release-5 on CentOS-7/x86_64) Message-ID: <1326982868.215.1567039829000.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10976 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10976/ to view the results. From ci at centos.org Thu Aug 29 00:51:04 2019 From: ci at centos.org (ci at centos.org) Date: Thu, 29 Aug 2019 00:51:04 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #299 In-Reply-To: <931875490.42.1566953464003.JavaMail.jenkins@jenkins.ci.centos.org> References: <931875490.42.1566953464003.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1598537902.217.1567039864170.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ Started by timer Running as SYSTEM [EnvInject] - Loading node environment variables. Building remotely on gluster-ci-slave01 (gluster) in workspace No credentials specified Wiping out workspace first. 
Cloning the remote Git repository
Cloning repository https://github.com/gluster/centosci.git
 > git init # timeout=10
Fetching upstream changes from https://github.com/gluster/centosci.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/gluster/centosci.git +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/gluster/centosci.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/gluster/centosci.git # timeout=10
Fetching upstream changes from https://github.com/gluster/centosci.git
 > git fetch --tags --progress https://github.com/gluster/centosci.git +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision 1fe330d34a62b56438f6eb86538286962b1abd90 (refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 1fe330d34a62b56438f6eb86538286962b1abd90
Commit message: "Merge pull request #73 from kshithijiyer/Adding_task_to_install_crefi"
 > git rev-list --no-walk 1fe330d34a62b56438f6eb86538286962b1abd90 # timeout=10
[gluster_ansible-infra] $ /bin/sh -xe /tmp/jenkins8953707433121637100.sh
+ set +x
string indices must be integers
Build step 'Execute shell' marked build as failure
Performing Post build task...
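The `string indices must be integers` message above is a Python `TypeError`. The failing build script is not shown in this log, so the exact cause cannot be confirmed here, but a common way to trigger it is indexing a raw JSON string as if it had already been parsed into a dict. A minimal hypothetical illustration (the variable names are not from the actual script):

```python
import json

# Hypothetical payload; the real Jenkins script's data is not shown in the log.
payload = '{"result": "FAILURE"}'

try:
    status = payload["result"]  # payload is still a str, not a dict
except TypeError as exc:
    print(exc)                  # a message like "string indices must be integers"

status = json.loads(payload)["result"]  # parse first, then index
print(status)
```

If this is indeed the cause, the fix is simply to `json.loads()` (or otherwise deserialize) the string before subscripting it.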
Could not match :Build started : False
Logical operation result is FALSE
Skipping script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
  cico -q node done $ssid
done
END OF POST BUILD TASK : 0

From ci at centos.org Fri Aug 30 00:16:37 2019
From: ci at centos.org (ci at centos.org)
Date: Fri, 30 Aug 2019 00:16:37 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #471
In-Reply-To: <1703700681.210.1567037838626.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1703700681.210.1567037838626.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <540634741.320.1567124197525.JavaMail.jenkins@jenkins.ci.centos.org>

See
Changes:
[kshithij.ki] Fixing lookup plugin issues in setup-glusto.yml
[kshithij.ki] Reverting back to absolute path for glusto.pub
------------------------------------------
[...truncated 39.02 KB...]
Transaction test succeeded
Running transaction
  Installing : python36-libs-3.6.8-1.el7.x86_64             1/52
  Installing : python36-3.6.8-1.el7.x86_64                  2/52
  Installing : apr-1.4.8-3.el7_4.1.x86_64                   3/52
  Installing : mpfr-3.1.1-4.el7.x86_64                      4/52
  Installing : libmpc-1.0.1-3.el7.x86_64                    5/52
  Installing : apr-util-1.5.2-6.el7.x86_64                  6/52
  Installing : python36-six-1.11.0-3.el7.noarch             7/52
  Installing : cpp-4.8.5-36.el7_6.2.x86_64                  8/52
  Installing : python36-idna-2.7-2.el7.noarch               9/52
  Installing : python36-pysocks-1.6.8-6.el7.noarch         10/52
  Installing : python36-urllib3-1.19.1-5.el7.noarch        11/52
  Installing : python36-pyroute2-0.4.13-2.el7.noarch       12/52
  Installing : python36-setuptools-39.2.0-3.el7.noarch     13/52
  Installing : python36-chardet-2.3.0-6.el7.noarch         14/52
  Installing : python36-requests-2.12.5-3.el7.noarch       15/52
  Installing : python36-distro-1.2.0-3.el7.noarch          16/52
  Installing : python36-markupsafe-0.23-3.el7.x86_64       17/52
  Installing : python36-jinja2-2.8.1-2.el7.noarch          18/52
  Installing : python36-rpm-4.11.3-4.el7.x86_64            19/52
  Installing : elfutils-0.172-2.el7.x86_64                 20/52
  Installing : unzip-6.0-19.el7.x86_64                     21/52
  Installing : dwz-0.11-3.el7.x86_64                       22/52
  Installing : bzip2-1.0.6-13.el7.x86_64                   23/52
  Installing : distribution-gpg-keys-1.32-1.el7.noarch     24/52
  Installing : mock-core-configs-30.4-1.el7.noarch         25/52
  Installing : usermode-1.111-5.el7.x86_64                 26/52
  Installing : pakchois-0.4-10.el7.x86_64                  27/52
  Installing : patch-2.7.1-10.el7_5.x86_64                 28/52
  Installing : libmodman-2.0.1-8.el7.x86_64                29/52
  Installing : libproxy-0.4.11-11.el7.x86_64               30/52
  Installing : gdb-7.6.1-114.el7.x86_64                    31/52
  Installing : perl-Thread-Queue-3.02-2.el7.noarch         32/52
  Installing : perl-srpm-macros-1-8.el7.noarch             33/52
  Installing : pigz-2.3.4-1.el7.x86_64                     34/52
  Installing : golang-src-1.11.5-1.el7.noarch              35/52
  Installing : nettle-2.7.1-8.el7.x86_64                   36/52
  Installing : kernel-headers-3.10.0-957.27.2.el7.x86_64   37/52
  Installing : glibc-headers-2.17-260.el7_6.6.x86_64       38/52
  Installing : glibc-devel-2.17-260.el7_6.6.x86_64         39/52
  Installing : gcc-4.8.5-36.el7_6.2.x86_64                 40/52
  Installing : zip-3.0-11.el7.x86_64                       41/52
  Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 42/52
  Installing : mercurial-2.6.2-8.el7_4.x86_64              43/52
  Installing : trousers-0.3.14-2.el7.x86_64                44/52
  Installing : gnutls-3.3.29-9.el7_6.x86_64                45/52
  Installing : neon-0.30.0-3.el7.x86_64                    46/52
  Installing : subversion-libs-1.7.14-14.el7.x86_64        47/52
  Installing : subversion-1.7.14-14.el7.x86_64             48/52
  Installing : golang-1.11.5-1.el7.x86_64                  49/52
  Installing : golang-bin-1.11.5-1.el7.x86_64              50/52
  Installing : rpm-build-4.11.3-35.el7.x86_64              51/52
  Installing : mock-1.4.16-1.el7.noarch                    52/52
  Verifying  : trousers-0.3.14-2.el7.x86_64                 1/52
  Verifying  : python36-idna-2.7-2.el7.noarch               2/52
  Verifying  : rpm-build-4.11.3-35.el7.x86_64               3/52
  Verifying  : glibc-headers-2.17-260.el7_6.6.x86_64        4/52
  Verifying  : python36-pysocks-1.6.8-6.el7.noarch          5/52
  Verifying  : mercurial-2.6.2-8.el7_4.x86_64               6/52
  Verifying  : zip-3.0-11.el7.x86_64                        7/52
  Verifying  : python36-3.6.8-1.el7.x86_64                  8/52
  Verifying  : subversion-libs-1.7.14-14.el7.x86_64         9/52
  Verifying  : python36-urllib3-1.19.1-5.el7.noarch        10/52
  Verifying  : kernel-headers-3.10.0-957.27.2.el7.x86_64   11/52
  Verifying  : nettle-2.7.1-8.el7.x86_64                   12/52
  Verifying  : gcc-4.8.5-36.el7_6.2.x86_64                 13/52
  Verifying  : golang-src-1.11.5-1.el7.noarch              14/52
  Verifying  : python36-pyroute2-0.4.13-2.el7.noarch       15/52
  Verifying  : pigz-2.3.4-1.el7.x86_64                     16/52
  Verifying  : perl-srpm-macros-1-8.el7.noarch             17/52
  Verifying  : golang-1.11.5-1.el7.x86_64                  18/52
  Verifying  : perl-Thread-Queue-3.02-2.el7.noarch         19/52
  Verifying  : golang-bin-1.11.5-1.el7.x86_64              20/52
  Verifying  : gdb-7.6.1-114.el7.x86_64                    21/52
  Verifying  : redhat-rpm-config-9.1.0-87.el7.centos.noarch 22/52
  Verifying  : gnutls-3.3.29-9.el7_6.x86_64                23/52
  Verifying  : mock-1.4.16-1.el7.noarch                    24/52
  Verifying  : libmodman-2.0.1-8.el7.x86_64                25/52
  Verifying  : python36-setuptools-39.2.0-3.el7.noarch     26/52
  Verifying  : mpfr-3.1.1-4.el7.x86_64                     27/52
  Verifying  : python36-six-1.11.0-3.el7.noarch            28/52
  Verifying  : apr-util-1.5.2-6.el7.x86_64                 29/52
  Verifying  : python36-chardet-2.3.0-6.el7.noarch         30/52
  Verifying  : patch-2.7.1-10.el7_5.x86_64                 31/52
  Verifying  : libmpc-1.0.1-3.el7.x86_64                   32/52
  Verifying  : pakchois-0.4-10.el7.x86_64                  33/52
  Verifying  : neon-0.30.0-3.el7.x86_64                    34/52
  Verifying  : usermode-1.111-5.el7.x86_64                 35/52
  Verifying  : apr-1.4.8-3.el7_4.1.x86_64                  36/52
  Verifying  : libproxy-0.4.11-11.el7.x86_64               37/52
  Verifying  : mock-core-configs-30.4-1.el7.noarch         38/52
  Verifying  : distribution-gpg-keys-1.32-1.el7.noarch     39/52
  Verifying  : glibc-devel-2.17-260.el7_6.6.x86_64         40/52
  Verifying  : bzip2-1.0.6-13.el7.x86_64                   41/52
  Verifying  : subversion-1.7.14-14.el7.x86_64             42/52
  Verifying  : python36-distro-1.2.0-3.el7.noarch          43/52
  Verifying  : dwz-0.11-3.el7.x86_64                       44/52
  Verifying  : unzip-6.0-19.el7.x86_64                     45/52
  Verifying  : python36-markupsafe-0.23-3.el7.x86_64       46/52
  Verifying  : cpp-4.8.5-36.el7_6.2.x86_64                 47/52
  Verifying  : python36-requests-2.12.5-3.el7.noarch       48/52
  Verifying  : python36-jinja2-2.8.1-2.el7.noarch          49/52
  Verifying  : python36-libs-3.6.8-1.el7.x86_64            50/52
  Verifying  : elfutils-0.172-2.el7.x86_64                 51/52
  Verifying  : python36-rpm-4.11.3-4.el7.x86_64            52/52

Installed:
  golang.x86_64 0:1.11.5-1.el7
  mock.noarch 0:1.4.16-1.el7
  rpm-build.x86_64 0:4.11.3-35.el7

Dependency Installed:
  apr.x86_64 0:1.4.8-3.el7_4.1
  apr-util.x86_64 0:1.5.2-6.el7
  bzip2.x86_64 0:1.0.6-13.el7
  cpp.x86_64 0:4.8.5-36.el7_6.2
  distribution-gpg-keys.noarch 0:1.32-1.el7
  dwz.x86_64 0:0.11-3.el7
  elfutils.x86_64 0:0.172-2.el7
  gcc.x86_64 0:4.8.5-36.el7_6.2
  gdb.x86_64 0:7.6.1-114.el7
  glibc-devel.x86_64 0:2.17-260.el7_6.6
  glibc-headers.x86_64 0:2.17-260.el7_6.6
  gnutls.x86_64 0:3.3.29-9.el7_6
  golang-bin.x86_64 0:1.11.5-1.el7
  golang-src.noarch 0:1.11.5-1.el7
  kernel-headers.x86_64 0:3.10.0-957.27.2.el7
  libmodman.x86_64 0:2.0.1-8.el7
  libmpc.x86_64 0:1.0.1-3.el7
  libproxy.x86_64 0:0.4.11-11.el7
  mercurial.x86_64 0:2.6.2-8.el7_4
  mock-core-configs.noarch 0:30.4-1.el7
  mpfr.x86_64 0:3.1.1-4.el7
  neon.x86_64 0:0.30.0-3.el7
  nettle.x86_64 0:2.7.1-8.el7
  pakchois.x86_64 0:0.4-10.el7
  patch.x86_64 0:2.7.1-10.el7_5
  perl-Thread-Queue.noarch 0:3.02-2.el7
  perl-srpm-macros.noarch 0:1-8.el7
  pigz.x86_64 0:2.3.4-1.el7
  python36.x86_64 0:3.6.8-1.el7
  python36-chardet.noarch 0:2.3.0-6.el7
  python36-distro.noarch 0:1.2.0-3.el7
  python36-idna.noarch 0:2.7-2.el7
  python36-jinja2.noarch 0:2.8.1-2.el7
  python36-libs.x86_64 0:3.6.8-1.el7
  python36-markupsafe.x86_64 0:0.23-3.el7
  python36-pyroute2.noarch 0:0.4.13-2.el7
  python36-pysocks.noarch 0:1.6.8-6.el7
  python36-requests.noarch 0:2.12.5-3.el7
  python36-rpm.x86_64 0:4.11.3-4.el7
  python36-setuptools.noarch 0:39.2.0-3.el7
  python36-six.noarch 0:1.11.0-3.el7
  python36-urllib3.noarch 0:1.19.1-5.el7
  redhat-rpm-config.noarch 0:9.1.0-87.el7.centos
  subversion.x86_64 0:1.7.14-14.el7
  subversion-libs.x86_64 0:1.7.14-14.el7
  trousers.x86_64 0:0.3.14-2.el7
  unzip.x86_64 0:6.0-19.el7
  usermode.x86_64 0:1.111-5.el7
  zip.x86_64 0:3.0-11.el7

Complete!
LINUX Installing dep.
Version: v0.5.0
[...curl download progress omitted...]
Installing gometalinter. Version: 2.0.5
[...curl download progress omitted...]
Installing etcd. Version: v3.3.9
[...curl download progress omitted...]
~/nightlyrpmoueyS5/go/src/github.com/gluster/glusterd2 ~
Installing vendored packages
Creating dist archive /root/nightlyrpmoueyS5/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz
Created dist archive /root/nightlyrpmoueyS5/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz
~ ~/nightlyrpmoueyS5 ~
INFO: mock.py version 1.4.16 starting (python version = 3.6.8)...
Start: init plugins
INFO: selinux disabled
Finish: init plugins
Start: run
INFO: Start(/root/nightlyrpmoueyS5/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64)
Start: clean chroot
Finish: clean chroot
Start: chroot init
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled yum cache
Start: cleaning yum metadata
Finish: cleaning yum metadata
INFO: enabled HW Info plugin
Mock Version: 1.4.16
INFO: Mock Version: 1.4.16
Start: yum install
Finish: yum install
Start: creating root cache
Finish: creating root cache
Finish: chroot init
INFO: Installed packages:
Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: Outputting list of installed packages
Finish: Outputting list of installed packages
ERROR: Exception(/root/nightlyrpmoueyS5/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 24 seconds
INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64
INFO: Cleaning up build root ('cleanup_on_failure=True')
Start: clean chroot
Finish: clean chroot
ERROR: Command failed:
 # /usr/bin/systemd-nspawn -q -M 271f18197d9040988a0ed69df55fb8a4 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.mns4sxr7:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_GB.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for :Building remotely : True
Logical operation result is TRUE
Running script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
  cico -q node done $ssid
done
[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins4826151149587774343.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done 45900605
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname   | ip_address  | chassis | used_count | current_state | comment  | distro | rel  | centos_version | architecture | node_pool | console_port | flavor |
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 181     | n54.crusty | 172.19.2.54 | crusty  | 3966       | Deployed      | 45900605 | None   | None | 7              | x86_64       | 1         | 2530         | None   |
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0

From ci at centos.org Fri Aug 30 00:41:11 2019
From: ci at centos.org (ci at centos.org)
Date: Fri, 30 Aug 2019 00:41:11 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #275
In-Reply-To: <878598344.212.1567039254147.JavaMail.jenkins@jenkins.ci.centos.org>
References: <878598344.212.1567039254147.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1738516862.321.1567125672035.JavaMail.jenkins@jenkins.ci.centos.org>

See
Changes:
[kshithij.ki] Fixing lookup plugin issues in setup-glusto.yml
[kshithij.ki] Reverting back to absolute path for glusto.pub
------------------------------------------
[...truncated 289.15 KB...]
TASK [container-engine/docker : check number of search domains] **************** Friday 30 August 2019 01:40:28 +0100 (0:00:00.304) 0:03:06.954 ********* TASK [container-engine/docker : check length of search domains] **************** Friday 30 August 2019 01:40:28 +0100 (0:00:00.301) 0:03:07.256 ********* TASK [container-engine/docker : check for minimum kernel version] ************** Friday 30 August 2019 01:40:28 +0100 (0:00:00.301) 0:03:07.557 ********* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Friday 30 August 2019 01:40:29 +0100 (0:00:00.289) 0:03:07.846 ********* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] *** Friday 30 August 2019 01:40:29 +0100 (0:00:00.605) 0:03:08.452 ********* TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Friday 30 August 2019 01:40:31 +0100 (0:00:01.379) 0:03:09.832 ********* TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Friday 30 August 2019 01:40:31 +0100 (0:00:00.262) 0:03:10.094 ********* TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Friday 30 August 2019 01:40:31 +0100 (0:00:00.259) 0:03:10.354 ********* TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Friday 30 August 2019 01:40:32 +0100 (0:00:00.309) 0:03:10.664 ********* TASK [container-engine/docker : Configure docker repository on Fedora] ********* Friday 30 August 2019 01:40:32 +0100 (0:00:00.310) 0:03:10.974 ********* TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Friday 30 August 2019 01:40:32 +0100 (0:00:00.278) 0:03:11.253 ********* TASK [container-engine/docker : Copy yum.conf for editing] ********************* Friday 30 August 2019 01:40:32 +0100 (0:00:00.306) 0:03:11.559 ********* TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Friday 30 August 2019 
01:40:33 +0100 (0:00:00.297) 0:03:11.856 ********* TASK [container-engine/docker : ensure docker packages are installed] ********** Friday 30 August 2019 01:40:33 +0100 (0:00:00.289) 0:03:12.146 ********* TASK [container-engine/docker : Ensure docker packages are installed] ********** Friday 30 August 2019 01:40:33 +0100 (0:00:00.374) 0:03:12.521 ********* TASK [container-engine/docker : get available packages on Ubuntu] ************** Friday 30 August 2019 01:40:34 +0100 (0:00:00.400) 0:03:12.922 ********* TASK [container-engine/docker : show available packages on ubuntu] ************* Friday 30 August 2019 01:40:34 +0100 (0:00:00.274) 0:03:13.196 ********* TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Friday 30 August 2019 01:40:34 +0100 (0:00:00.378) 0:03:13.575 ********* TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Friday 30 August 2019 01:40:35 +0100 (0:00:00.281) 0:03:13.857 ********* ok: [kube3] ok: [kube2] ok: [kube1] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Friday 30 August 2019 01:40:37 +0100 (0:00:01.952) 0:03:15.810 ********* ok: [kube1] ok: [kube2] ok: [kube3] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Friday 30 August 2019 01:40:38 +0100 (0:00:01.079) 0:03:16.890 ********* TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Friday 30 August 2019 01:40:38 +0100 (0:00:00.323) 0:03:17.213 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Friday 30 August 2019 01:40:39 +0100 (0:00:01.108) 0:03:18.322 ********* TASK [container-engine/docker : get systemd version] *************************** Friday 30 August 2019 01:40:39 +0100 (0:00:00.305) 0:03:18.627 ********* TASK [container-engine/docker : Write docker.service systemd file] ************* Friday 30 August 2019 01:40:40 +0100 (0:00:00.309) 0:03:18.937 ********* TASK [container-engine/docker : Write docker options systemd drop-in] ********** Friday 30 August 2019 01:40:40 +0100 (0:00:00.311) 0:03:19.248 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Friday 30 August 2019 01:40:42 +0100 (0:00:02.266) 0:03:21.514 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Friday 30 August 2019 01:40:45 +0100 (0:00:02.269) 0:03:23.784 ********* TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Friday 30 August 2019 01:40:45 +0100 (0:00:00.341) 0:03:24.125 ********* RUNNING HANDLER [container-engine/docker : restart docker] ********************* Friday 30 August 2019 01:40:45 +0100 (0:00:00.239) 0:03:24.364 ********* changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Friday 30 August 2019 01:40:47 +0100 (0:00:01.906) 0:03:26.271 ********* changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER 
[container-engine/docker : Docker | reload docker.socket] ****** Friday 30 August 2019 01:40:48 +0100 (0:00:01.134) 0:03:27.405 ********* RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Friday 30 August 2019 01:40:49 +0100 (0:00:00.288) 0:03:27.694 ********* changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Friday 30 August 2019 01:40:53 +0100 (0:00:04.178) 0:03:31.872 ********* Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube1] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Friday 30 August 2019 01:41:03 +0100 (0:00:10.212) 0:03:42.085 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : ensure docker service is started and enabled] *** Friday 30 August 2019 01:41:04 +0100 (0:00:01.286) 0:03:43.372 ********* ok: [kube1] => (item=docker) ok: [kube2] => (item=docker) ok: [kube3] => (item=docker) TASK [download : include_tasks] ************************************************ Friday 30 August 2019 01:41:05 +0100 (0:00:01.223) 0:03:44.596 ********* included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Friday 30 August 2019 01:41:06 +0100 (0:00:00.527) 0:03:45.123 ********* ok: [kube1] ok: [kube3] ok: [kube2] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Friday 30 August 2019 01:41:07 +0100 (0:00:01.140) 0:03:46.264 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [download : container_download | create local directory for saved/loaded container images] *** Friday 30 August 2019 01:41:08 +0100 (0:00:01.015) 0:03:47.280 ********* TASK [download : 
Download items] ***********************************************
Friday 30 August 2019 01:41:08 +0100 (0:00:00.150) 0:03:47.430 *********
fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
[...the identical 'delegate_to' failure repeated on kube1, kube2 and kube3 for each remaining download item; duplicate tracebacks truncated...]
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3

PLAY RECAP *********************************************************************
kube1 : ok=109  changed=22  unreachable=0  failed=10  skipped=116  rescued=0  ignored=0
kube2 : ok=96   changed=22  unreachable=0  failed=10  skipped=111  rescued=0  ignored=0
kube3 : ok=94   changed=22  unreachable=0  failed=10  skipped=113  rescued=0  ignored=0

Friday 30 August 2019 01:41:11 +0100 (0:00:02.780) 0:03:50.210 *********
===============================================================================
Install packages ------------------------------------------------------- 36.24s
Wait for host to be available ------------------------------------------ 24.00s
gather facts from all instances ---------------------------------------- 17.99s
container-engine/docker : Docker | pause while Docker restarts --------- 10.21s
Persist loaded modules -------------------------------------------------- 6.22s
container-engine/docker : Docker | reload docker ------------------------ 4.18s
kubernetes/preinstall : Create kubernetes directories ------------------- 4.17s
kubernetes/preinstall : Create cni directories -------------------------- 2.80s
download : Download items
----------------------------------------------- 2.78s
bootstrap-os : Gather nodes hostnames ----------------------------------- 2.69s
Load required kernel modules -------------------------------------------- 2.64s
Extend root VG ---------------------------------------------------------- 2.48s
bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.43s
container-engine/docker : Write docker dns systemd drop-in -------------- 2.27s
container-engine/docker : Write docker options systemd drop-in ---------- 2.27s
download : Download items ----------------------------------------------- 2.21s
kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.14s
download : Sync container ----------------------------------------------- 2.05s
kubernetes/preinstall : Set selinux policy ------------------------------ 1.99s
kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 1.98s
==> kube3: An error occurred. The error will be shown after all tasks complete.
An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below.
An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again:
Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again.
Build step 'Execute shell' marked build as failure
Performing Post build task...
Could not match :Build started : False
Logical operation result is FALSE
Skipping script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0

From ci at centos.org Fri Aug 30 00:50:30 2019
From: ci at centos.org (ci at centos.org)
Date: Fri, 30 Aug 2019 00:50:30 +0000 (UTC)
Subject: [CI-results] gluster_build-rpms - Build # 10984 - Failure!
(release-5 on CentOS-7/x86_64)
Message-ID: <669646773.326.1567126230607.JavaMail.jenkins@jenkins.ci.centos.org>

gluster_build-rpms - Build # 10984 - Failure:
Check console output at https://ci.centos.org/job/gluster_build-rpms/10984/ to view the results.

From ci at centos.org Fri Aug 30 00:50:30 2019
From: ci at centos.org (ci at centos.org)
Date: Fri, 30 Aug 2019 00:50:30 +0000 (UTC)
Subject: [CI-results] gluster_build-rpms - Build # 10986 - Failure! (release-6 on CentOS-6/x86_64)
Message-ID: <1270389479.327.1567126230607.JavaMail.jenkins@jenkins.ci.centos.org>

gluster_build-rpms - Build # 10986 - Failure:
Check console output at https://ci.centos.org/job/gluster_build-rpms/10986/ to view the results.

From ci at centos.org Fri Aug 30 00:51:04 2019
From: ci at centos.org (ci at centos.org)
Date: Fri, 30 Aug 2019 00:51:04 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #300
In-Reply-To: <1598537902.217.1567039864170.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1598537902.217.1567039864170.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <966066283.328.1567126264081.JavaMail.jenkins@jenkins.ci.centos.org>

See

Changes:
[kshithij.ki] Fixing lookup plugin issues in setup-glusto.yml
[kshithij.ki] Reverting back to absolute path for glusto.pub
------------------------------------------
Started by timer
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on gluster-ci-slave01 (gluster) in workspace
No credentials specified
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository https://github.com/gluster/centosci.git
 > git init # timeout=10
Fetching upstream changes from https://github.com/gluster/centosci.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/gluster/centosci.git +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/gluster/centosci.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/gluster/centosci.git # timeout=10
Fetching upstream changes from https://github.com/gluster/centosci.git
 > git fetch --tags --progress https://github.com/gluster/centosci.git +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision a96382fb5d5c85392f51884bf96076a7eae84d15 (refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f a96382fb5d5c85392f51884bf96076a7eae84d15
Commit message: "Merge pull request #75 from kshithijiyer/reverting_back_to_absolute_path"
 > git rev-list --no-walk 1fe330d34a62b56438f6eb86538286962b1abd90 # timeout=10
[gluster_ansible-infra] $ /bin/sh -xe /tmp/jenkins4203905392878115599.sh
+ set +x
string indices must be integers
Build step 'Execute shell' marked build as failure
Performing Post build task...
Could not match :Build started : False
Logical operation result is FALSE
Skipping script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0

From ci at centos.org Sat Aug 31 00:16:45 2019
From: ci at centos.org (ci at centos.org)
Date: Sat, 31 Aug 2019 00:16:45 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #472
In-Reply-To: <540634741.320.1567124197525.JavaMail.jenkins@jenkins.ci.centos.org>
References: <540634741.320.1567124197525.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <208576138.406.1567210605254.JavaMail.jenkins@jenkins.ci.centos.org>

See

Changes:
[dkhandel] Register glusto public key in a variable
------------------------------------------
[...truncated 40.30 KB...]
Installing : gdb-7.6.1-115.el7.x86_64 8/55 Installing : bzip2-1.0.6-13.el7.x86_64 9/55 Installing : elfutils-0.176-2.el7.x86_64 10/55 Installing : distribution-gpg-keys-1.32-1.el7.noarch 11/55 Installing : mock-core-configs-30.4-1.el7.noarch 12/55 Installing : python-srpm-macros-3-32.el7.noarch 13/55 Installing : mercurial-2.6.2-10.el7.x86_64 14/55 Installing : pakchois-0.4-10.el7.x86_64 15/55 Installing : nettle-2.7.1-8.el7.x86_64 16/55 Installing : libmodman-2.0.1-8.el7.x86_64 17/55 Installing : libproxy-0.4.11-11.el7.x86_64 18/55 Installing : perl-Thread-Queue-3.02-2.el7.noarch 19/55 Installing : perl-srpm-macros-1-8.el7.noarch 20/55 Installing : pigz-2.3.4-1.el7.x86_64 21/55 Installing : golang-src-1.11.5-1.el7.noarch 22/55 Installing : unzip-6.0-20.el7.x86_64 23/55 Installing : kernel-headers-3.10.0-1062.el7.x86_64 24/55 Installing : glibc-headers-2.17-292.el7.x86_64 25/55 Installing : glibc-devel-2.17-292.el7.x86_64 26/55 Installing : gcc-4.8.5-39.el7.x86_64 27/55 Installing : zip-3.0-11.el7.x86_64 28/55 Installing : redhat-rpm-config-9.1.0-88.el7.centos.noarch 29/55
Installing : patch-2.7.1-11.el7.x86_64 30/55 Installing : trousers-0.3.14-2.el7.x86_64 31/55 Installing : gnutls-3.3.29-9.el7_6.x86_64 32/55 Installing : neon-0.30.0-4.el7.x86_64 33/55 Installing : subversion-libs-1.7.14-14.el7.x86_64 34/55 Installing : subversion-1.7.14-14.el7.x86_64 35/55 Installing : golang-1.11.5-1.el7.x86_64 36/55 Installing : golang-bin-1.11.5-1.el7.x86_64 37/55 Installing : libtirpc-0.2.4-0.16.el7.x86_64 38/55 Installing : python3-pip-9.0.3-5.el7.noarch 39/55 Installing : python3-setuptools-39.2.0-10.el7.noarch 40/55 Installing : python3-3.6.8-10.el7.x86_64 41/55 Installing : python3-libs-3.6.8-10.el7.x86_64 42/55 Installing : python36-six-1.11.0-3.el7.noarch 43/55 Installing : python36-markupsafe-0.23-3.el7.x86_64 44/55 Installing : python36-jinja2-2.8.1-2.el7.noarch 45/55 Installing : python36-rpm-4.11.3-4.el7.x86_64 46/55 Installing : python36-idna-2.7-2.el7.noarch 47/55 Installing : python36-pysocks-1.6.8-6.el7.noarch 48/55 Installing : python36-urllib3-1.19.1-5.el7.noarch 49/55 Installing : python36-chardet-2.3.0-6.el7.noarch 50/55 Installing : python36-requests-2.12.5-3.el7.noarch 51/55 Installing : python36-pyroute2-0.4.13-2.el7.noarch 52/55 Installing : python36-distro-1.2.0-3.el7.noarch 53/55 Installing : mock-1.4.16-1.el7.noarch 54/55 Installing : rpm-build-4.11.3-40.el7.x86_64 55/55 Verifying : libtirpc-0.2.4-0.16.el7.x86_64 1/55 Verifying : trousers-0.3.14-2.el7.x86_64 2/55 Verifying : python36-idna-2.7-2.el7.noarch 3/55 Verifying : patch-2.7.1-11.el7.x86_64 4/55 Verifying : python36-pysocks-1.6.8-6.el7.noarch 5/55 Verifying : python36-chardet-2.3.0-6.el7.noarch 6/55 Verifying : python3-libs-3.6.8-10.el7.x86_64 7/55 Verifying : zip-3.0-11.el7.x86_64 8/55 Verifying : apr-1.4.8-5.el7.x86_64 9/55 Verifying : subversion-libs-1.7.14-14.el7.x86_64 10/55 Verifying : python36-urllib3-1.19.1-5.el7.noarch 11/55 Verifying : kernel-headers-3.10.0-1062.el7.x86_64 12/55 Verifying : gcc-4.8.5-39.el7.x86_64 13/55 Verifying : 
unzip-6.0-20.el7.x86_64 14/55 Verifying : python3-pip-9.0.3-5.el7.noarch 15/55 Verifying : golang-src-1.11.5-1.el7.noarch 16/55 Verifying : python36-pyroute2-0.4.13-2.el7.noarch 17/55 Verifying : pigz-2.3.4-1.el7.x86_64 18/55 Verifying : perl-srpm-macros-1-8.el7.noarch 19/55 Verifying : golang-1.11.5-1.el7.x86_64 20/55 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 21/55 Verifying : golang-bin-1.11.5-1.el7.x86_64 22/55 Verifying : neon-0.30.0-4.el7.x86_64 23/55 Verifying : libproxy-0.4.11-11.el7.x86_64 24/55 Verifying : gnutls-3.3.29-9.el7_6.x86_64 25/55 Verifying : mock-1.4.16-1.el7.noarch 26/55 Verifying : libmodman-2.0.1-8.el7.x86_64 27/55 Verifying : mpfr-3.1.1-4.el7.x86_64 28/55 Verifying : rpm-build-4.11.3-40.el7.x86_64 29/55 Verifying : python36-six-1.11.0-3.el7.noarch 30/55 Verifying : apr-util-1.5.2-6.el7.x86_64 31/55 Verifying : nettle-2.7.1-8.el7.x86_64 32/55 Verifying : libmpc-1.0.1-3.el7.x86_64 33/55 Verifying : python3-setuptools-39.2.0-10.el7.noarch 34/55 Verifying : pakchois-0.4-10.el7.x86_64 35/55 Verifying : mercurial-2.6.2-10.el7.x86_64 36/55 Verifying : python-srpm-macros-3-32.el7.noarch 37/55 Verifying : python3-3.6.8-10.el7.x86_64 38/55 Verifying : glibc-devel-2.17-292.el7.x86_64 39/55 Verifying : mock-core-configs-30.4-1.el7.noarch 40/55 Verifying : distribution-gpg-keys-1.32-1.el7.noarch 41/55 Verifying : elfutils-0.176-2.el7.x86_64 42/55 Verifying : bzip2-1.0.6-13.el7.x86_64 43/55 Verifying : subversion-1.7.14-14.el7.x86_64 44/55 Verifying : python36-distro-1.2.0-3.el7.noarch 45/55 Verifying : gdb-7.6.1-115.el7.x86_64 46/55 Verifying : dwz-0.11-3.el7.x86_64 47/55 Verifying : glibc-headers-2.17-292.el7.x86_64 48/55 Verifying : python36-markupsafe-0.23-3.el7.x86_64 49/55 Verifying : cpp-4.8.5-39.el7.x86_64 50/55 Verifying : python36-requests-2.12.5-3.el7.noarch 51/55 Verifying : python36-jinja2-2.8.1-2.el7.noarch 52/55 Verifying : usermode-1.111-6.el7.x86_64 53/55 Verifying : python36-rpm-4.11.3-4.el7.x86_64 54/55 Verifying : 
redhat-rpm-config-9.1.0-88.el7.centos.noarch 55/55 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.16-1.el7 rpm-build.x86_64 0:4.11.3-40.el7 Dependency Installed: apr.x86_64 0:1.4.8-5.el7 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-39.el7 distribution-gpg-keys.noarch 0:1.32-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.176-2.el7 gcc.x86_64 0:4.8.5-39.el7 gdb.x86_64 0:7.6.1-115.el7 glibc-devel.x86_64 0:2.17-292.el7 glibc-headers.x86_64 0:2.17-292.el7 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-1062.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 libtirpc.x86_64 0:0.2.4-0.16.el7 mercurial.x86_64 0:2.6.2-10.el7 mock-core-configs.noarch 0:30.4-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-4.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-11.el7 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-srpm-macros.noarch 0:3-32.el7 python3.x86_64 0:3.6.8-10.el7 python3-libs.x86_64 0:3.6.8-10.el7 python3-pip.noarch 0:9.0.3-5.el7 python3-setuptools.noarch 0:39.2.0-10.el7 python36-chardet.noarch 0:2.3.0-6.el7 python36-distro.noarch 0:1.2.0-3.el7 python36-idna.noarch 0:2.7-2.el7 python36-jinja2.noarch 0:2.8.1-2.el7 python36-markupsafe.x86_64 0:0.23-3.el7 python36-pyroute2.noarch 0:0.4.13-2.el7 python36-pysocks.noarch 0:1.6.8-6.el7 python36-requests.noarch 0:2.12.5-3.el7 python36-rpm.x86_64 0:4.11.3-4.el7 python36-six.noarch 0:1.11.0-3.el7 python36-urllib3.noarch 0:1.19.1-5.el7 redhat-rpm-config.noarch 0:9.1.0-88.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-20.el7 usermode.x86_64 0:1.111-6.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 2198 0 --:--:-- --:--:-- --:--:-- 2208 100 8513k 100 8513k 0 0 15.1M 0 --:--:-- --:--:-- --:--:-- 15.1M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2285 0 --:--:-- --:--:-- --:--:-- 2280 100 38.3M 100 38.3M 0 0 39.9M 0 --:--:-- --:--:-- --:--:-- 39.9M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 595 0 --:--:-- --:--:-- --:--:-- 597 0 0 0 620 0 0 1746 0 --:--:-- --:--:-- --:--:-- 1746 100 10.7M 100 10.7M 0 0 16.9M 0 --:--:-- --:--:-- --:--:-- 16.9M ~/nightlyrpmRAHnRL/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmRAHnRL/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmRAHnRL/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmRAHnRL ~ INFO: mock.py version 1.4.16 starting (python version = 3.6.8)... 
Start: init plugins
INFO: selinux disabled
Finish: init plugins
Start: run
INFO: Start(/root/nightlyrpmRAHnRL/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64)
Start: clean chroot
Finish: clean chroot
Start: chroot init
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled yum cache
Start: cleaning yum metadata
Finish: cleaning yum metadata
INFO: enabled HW Info plugin
Mock Version: 1.4.16
INFO: Mock Version: 1.4.16
Start: yum install
Finish: yum install
Start: creating root cache
Finish: creating root cache
Finish: chroot init
INFO: Installed packages:
Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: Outputting list of installed packages
Finish: Outputting list of installed packages
ERROR: Exception(/root/nightlyrpmRAHnRL/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 27 seconds
INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64
INFO: Cleaning up build root ('cleanup_on_failure=True')
Start: clean chroot
Finish: clean chroot
ERROR: Command failed:
 # /usr/bin/systemd-nspawn -q -M f4628d3a92e74bd6ac8938ba226fdb63 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.ceqxh2g7:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for :Building remotely : True
Logical operation result is TRUE
Running script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins6041967298370921952.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done 51c246f3
+---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname  | ip_address  | chassis | used_count | current_state | comment  | distro | rel  | centos_version | architecture | node_pool | console_port | flavor |
+---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 84      | n20.pufty | 172.19.3.84 | pufty   | 3970       | Deployed      | 51c246f3 | None   | None | 7              | x86_64       | 1         | 2190         | None   |
+---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0

From ci at centos.org Sat Aug 31 00:41:01 2019
From: ci at centos.org (ci at centos.org)
Date: Sat, 31 Aug 2019 00:41:01 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #276
In-Reply-To: <1738516862.321.1567125672035.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1738516862.321.1567125672035.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <892441319.407.1567212061619.JavaMail.jenkins@jenkins.ci.centos.org>

See

Changes:
[dkhandel] Register glusto public key in a variable
------------------------------------------
[...truncated 289.54 KB...]
TASK [container-engine/docker : check number of search domains] ****************
Saturday 31 August 2019  01:40:17 +0100 (0:00:00.295)       0:03:03.553 *******

TASK [container-engine/docker : check length of search domains] ****************
Saturday 31 August 2019  01:40:18 +0100 (0:00:00.305)       0:03:03.859 *******

TASK [container-engine/docker : check for minimum kernel version] **************
Saturday 31 August 2019  01:40:18 +0100 (0:00:00.315)       0:03:04.174 *******

TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] ***
Saturday 31 August 2019  01:40:18 +0100 (0:00:00.302)       0:03:04.476 *******

TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] ***
Saturday 31 August 2019  01:40:19 +0100 (0:00:00.567)       0:03:05.044 *******

TASK [container-engine/docker : ensure docker-ce repository public key is installed] ***
Saturday 31 August 2019  01:40:20 +0100 (0:00:01.349)       0:03:06.393 *******

TASK [container-engine/docker : ensure docker-ce repository is enabled] ********
Saturday 31 August 2019  01:40:21 +0100 (0:00:00.255)       0:03:06.649 *******

TASK [container-engine/docker : ensure docker-engine repository public key is installed] ***
Saturday 31 August 2019  01:40:21 +0100 (0:00:00.290)       0:03:06.940 *******

TASK [container-engine/docker : ensure docker-engine repository is enabled] ****
Saturday 31 August 2019  01:40:21 +0100 (0:00:00.348)       0:03:07.288 *******

TASK [container-engine/docker : Configure docker repository on Fedora] *********
Saturday 31 August 2019  01:40:22 +0100 (0:00:00.378)       0:03:07.666 *******

TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] ***
Saturday 31 August 2019  01:40:22 +0100 (0:00:00.287)       0:03:07.954 *******

TASK [container-engine/docker : Copy yum.conf for editing] *********************
Saturday 31 August 2019  01:40:22 +0100 (0:00:00.294)       0:03:08.248 *******

TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ******
Saturday 31 August 2019  01:40:22 +0100 (0:00:00.296)       0:03:08.545 *******

TASK [container-engine/docker : ensure docker packages are installed] **********
Saturday 31 August 2019  01:40:23 +0100 (0:00:00.288)       0:03:08.833 *******

TASK [container-engine/docker : Ensure docker packages are installed] **********
Saturday 31 August 2019  01:40:23 +0100 (0:00:00.367)       0:03:09.201 *******

TASK [container-engine/docker : get available packages on Ubuntu] **************
Saturday 31 August 2019  01:40:23 +0100 (0:00:00.353)       0:03:09.554 *******

TASK [container-engine/docker : show available packages on ubuntu] *************
Saturday 31 August 2019  01:40:24 +0100 (0:00:00.297)       0:03:09.852 *******

TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] ***
Saturday 31 August 2019  01:40:24 +0100 (0:00:00.280)       0:03:10.133 *******

TASK [container-engine/docker : ensure service is started if docker packages are already present] ***
Saturday 31 August 2019  01:40:24 +0100 (0:00:00.354)       0:03:10.487 *******
ok: [kube3]
ok: [kube2]
ok: [kube1]
 [WARNING]: flush_handlers task does not support when conditional

TASK [container-engine/docker : set fact for docker_version] *******************
Saturday 31 August 2019  01:40:26 +0100 (0:00:02.031)       0:03:12.519 *******
ok: [kube2]
ok: [kube1]
ok: [kube3]

TASK [container-engine/docker : check minimum docker version for docker_dns mode. You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] ***
Saturday 31 August 2019  01:40:27 +0100 (0:00:01.005)       0:03:13.524 *******

TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] ***
Saturday 31 August 2019  01:40:28 +0100 (0:00:00.290)       0:03:13.815 *******
changed: [kube1]
changed: [kube2]
changed: [kube3]

TASK [container-engine/docker : Write docker proxy drop-in] ********************
Saturday 31 August 2019  01:40:29 +0100 (0:00:01.095)       0:03:14.910 *******

TASK [container-engine/docker : get systemd version] ***************************
Saturday 31 August 2019  01:40:29 +0100 (0:00:00.385)       0:03:15.295 *******

TASK [container-engine/docker : Write docker.service systemd file] *************
Saturday 31 August 2019  01:40:29 +0100 (0:00:00.322)       0:03:15.618 *******

TASK [container-engine/docker : Write docker options systemd drop-in] **********
Saturday 31 August 2019  01:40:30 +0100 (0:00:00.314)       0:03:15.932 *******
changed: [kube1]
changed: [kube2]
changed: [kube3]

TASK [container-engine/docker : Write docker dns systemd drop-in] **************
Saturday 31 August 2019  01:40:32 +0100 (0:00:02.235)       0:03:18.168 *******
changed: [kube1]
changed: [kube2]
changed: [kube3]

TASK [container-engine/docker : Copy docker orphan clean up script to the node] ***
Saturday 31 August 2019  01:40:34 +0100 (0:00:02.146)       0:03:20.314 *******

TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] ***
Saturday 31 August 2019  01:40:35 +0100 (0:00:00.339)       0:03:20.653 *******

RUNNING HANDLER [container-engine/docker : restart docker] *********************
Saturday 31 August 2019  01:40:35 +0100 (0:00:00.241)       0:03:20.895 *******
changed: [kube1]
changed: [kube2]
changed: [kube3]

RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************
Saturday 31 August 2019  01:40:37 +0100 (0:00:01.975)       0:03:22.871 *******
changed: [kube1]
changed: [kube2]
changed: [kube3]

RUNNING HANDLER [container-engine/docker : Docker | reload docker.socket] ******
Saturday 31 August 2019  01:40:38 +0100 (0:00:01.215)       0:03:24.086 *******

RUNNING HANDLER [container-engine/docker : Docker | reload docker] *************
Saturday 31 August 2019  01:40:38 +0100 (0:00:00.291)       0:03:24.378 *******
changed: [kube1]
changed: [kube2]
changed: [kube3]

RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] ***
Saturday 31 August 2019  01:40:42 +0100 (0:00:04.010)       0:03:28.389 *******
Pausing for 10 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
[container-engine/docker : Docker | pause while Docker restarts]
Waiting for docker restart:
ok: [kube1]

RUNNING HANDLER [container-engine/docker : Docker | wait for docker] ***********
Saturday 31 August 2019  01:40:53 +0100 (0:00:10.256)       0:03:38.646 *******
changed: [kube1]
changed: [kube2]
changed: [kube3]

TASK [container-engine/docker : ensure docker service is started and enabled] ***
Saturday 31 August 2019  01:40:54 +0100 (0:00:01.348)       0:03:39.994 *******
ok: [kube1] => (item=docker)
ok: [kube2] => (item=docker)
ok: [kube3] => (item=docker)

TASK [download : include_tasks] ************************************************
Saturday 31 August 2019  01:40:55 +0100 (0:00:01.393)       0:03:41.387 *******
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3

TASK [download : Register docker images info] **********************************
Saturday 31 August 2019  01:40:56 +0100 (0:00:00.537)       0:03:41.925 *******
ok: [kube1]
ok: [kube3]
ok: [kube2]

TASK [download : container_download | Create dest directory for saved/loaded container images] ***
Saturday 31 August 2019  01:40:57 +0100 (0:00:01.053)       0:03:42.979 *******
changed: [kube1]
changed: [kube3]
changed: [kube2]

TASK [download : container_download | create local directory for saved/loaded container images] ***
Saturday 31 August 2019  01:40:58 +0100 (0:00:00.995)       0:03:43.975 *******

TASK [download : Download items] ***********************************************
Saturday 31 August 2019  01:40:58 +0100 (0:00:00.130)       0:03:44.106 *******
fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
fatal: [kube2]: FAILED! => (same TaskInclude error as kube1)
fatal: [kube3]: FAILED! => (same TaskInclude error as kube1)
[the identical fatal message repeats for kube1, kube2 and kube3 on each remaining download item]
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
[further identical fatals for kube1, kube2 and kube3]
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
[further identical fatals for kube1, kube2 and kube3]

PLAY RECAP *********************************************************************
kube1                      : ok=109  changed=22   unreachable=0    failed=10   skipped=116  rescued=0    ignored=0
kube2                      : ok=96   changed=22   unreachable=0    failed=10   skipped=111  rescued=0    ignored=0
kube3                      : ok=94   changed=22   unreachable=0    failed=10   skipped=113  rescued=0    ignored=0

Saturday 31 August 2019  01:41:01 +0100 (0:00:02.786)       0:03:46.892 *******
===============================================================================
Install packages ------------------------------------------------------- 35.24s
Wait for host to be available ------------------------------------------ 21.60s
gather facts from all instances ---------------------------------------- 17.15s
container-engine/docker : Docker | pause while Docker restarts --------- 10.26s
Persist loaded modules -------------------------------------------------- 6.26s
kubernetes/preinstall : Create kubernetes directories ------------------- 4.19s
container-engine/docker : Docker | reload docker ------------------------ 4.01s
Gathering Facts --------------------------------------------------------- 3.04s
download : Download items ----------------------------------------------- 2.79s
bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.72s
Load required kernel modules -------------------------------------------- 2.66s
bootstrap-os : Gather nodes hostnames ----------------------------------- 2.60s
kubernetes/preinstall : Create cni directories -------------------------- 2.53s
Extend root VG ---------------------------------------------------------- 2.37s
kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.28s
container-engine/docker : Write docker options systemd drop-in ---------- 2.24s
container-engine/docker : Write docker dns systemd drop-in -------------- 2.15s
download : Sync container ----------------------------------------------- 2.10s
download : Download items ----------------------------------------------- 2.04s
container-engine/docker : ensure service is started if docker packages are already present --- 2.03s

==> kube3: An error occurred. The error will be shown after all tasks complete.
An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below.

An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again:

Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again.
Build step 'Execute shell' marked build as failure
Performing Post build task...
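The repeated fatal above is Ansible rejecting `delegate_to` on a dynamic `include_tasks` (a TaskInclude): task-level keywords like `delegate_to` cannot be attached to the include itself. A hedged sketch of the usual restructuring, with hypothetical file and variable names rather than the actual kubespray ones:

```yaml
# Broken shape: this is what raises
# "'delegate_to' is not a valid attribute for a TaskInclude".
# - name: container_download | Make download decision if pull is required by tag or sha256
#   include_tasks: download_decision.yml        # hypothetical file
#   delegate_to: "{{ download_delegate }}"      # not allowed here

# Working shape: keep the include plain...
- name: container_download | Make download decision if pull is required by tag or sha256
  include_tasks: download_decision.yml

# ...and move the delegation onto the tasks inside the included file:
# download_decision.yml (hypothetical contents)
# - name: decide whether a pull is required
#   command: /bin/true
#   delegate_to: "{{ download_delegate }}"
```

Another option on newer Ansible is `import_tasks`, which is resolved statically and accepts inherited keywords; which fix applies depends on how the role uses loop variables.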
Could not match :Build started  : False
Logical operation result is FALSE
Skipping script  : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0

From ci at centos.org  Sat Aug 31 01:20:24 2019
From: ci at centos.org (ci at centos.org)
Date: Sat, 31 Aug 2019 01:20:24 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #301
In-Reply-To: <966066283.328.1567126264081.JavaMail.jenkins@jenkins.ci.centos.org>
References: <966066283.328.1567126264081.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <2032292787.413.1567214424577.JavaMail.jenkins@jenkins.ci.centos.org>

See 

Changes:

[dkhandel] Register glusto public key in a variable

------------------------------------------
[...truncated 62.47 KB...]
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=7    changed=4    unreachable=0    failed=0    skipped=3    rescued=0    ignored=0

--> Scenario: 'default'
--> Action: 'prepare'

PLAY [Prepare] *****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

PLAY RECAP *********************************************************************
instance                   : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── dependency
    ├── cleanup
    ├── destroy
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: author
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
(same meta/main.yml dump as above)
[703] Should change default metadata: description
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
(same meta/main.yml dump as above)
[703] Should change default metadata: company
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
(same meta/main.yml dump as above)
[703] Should change default metadata: license
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
(same meta/main.yml dump as above)
An error occurred during the test sequence action: 'lint'. Cleaning up.
--> Scenario: 'default'
--> Action: 'cleanup'
Skipping, cleanup playbook not configured.
--> Scenario: 'default'
--> Action: 'destroy'

PLAY [Destroy] *****************************************************************

TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=instance)

TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Delete docker network(s)] ************************************************

PLAY RECAP *********************************************************************
localhost                  : ok=2    changed=2    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0

--> Pruning extra files from scenario ephemeral directory
./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── dependency
    ├── create
    └── prepare

--> Scenario: 'default'
--> Action: 'dependency'
Skipping, missing the requirements file.
--> Scenario: 'default'
--> Action: 'create'
--> Sanity checks: 'docker'

PLAY [Create] ******************************************************************

TASK [Log into a Docker registry] **********************************************
skipping: [localhost] => (item=None)

TASK [Create Dockerfiles from image names] *************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Determine which docker image info module to use] *************************
ok: [localhost]

TASK [Discover local Docker images] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Build an Ansible compatible image (new)] *********************************
ok: [localhost] => (item=molecule_local/centos/systemd)

TASK [Build an Ansible compatible image (old)] *********************************
skipping: [localhost] => (item=molecule_local/centos/systemd)

TASK [Create docker network(s)] ************************************************

TASK [Determine the CMD directives] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Create molecule instance(s)] *********************************************
changed: [localhost] => (item=instance)

TASK [Wait for instance(s) creation to complete] *******************************
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=7    changed=3    unreachable=0    failed=0    skipped=3    rescued=0    ignored=0

--> Scenario: 'default'
--> Action: 'prepare'

PLAY [Prepare] *****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

PLAY RECAP *********************************************************************
instance                   : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── dependency
    ├── cleanup
    ├── destroy
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}

[703] Should change default metadata: author
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1

[703] Should change default metadata: description
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1

[703] Should change default metadata: company
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1

[703] Should change default metadata: license
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1

An error occurred during the test sequence action: 'lint'. Cleaning up.
--> Scenario: 'default'
--> Action: 'cleanup'
Skipping, cleanup playbook not configured.
--> Scenario: 'default'
--> Action: 'destroy'

PLAY [Destroy] *****************************************************************

TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=instance)

TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Delete docker network(s)] ************************************************

PLAY RECAP *********************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0

--> Pruning extra files from scenario ephemeral directory
Build step 'Execute shell' marked build as failure
Performing Post build task...
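The [703] warnings above are ansible-lint flagging the unedited `ansible-galaxy init` placeholders in the role's meta/main.yml (author, description, company, license), and [701] is the missing platforms list; together they abort the 'lint' action and fail the build. A sketch of a galaxy_info block that would satisfy those rules — all concrete values here are illustrative, not taken from the repository:

```yaml
# meta/main.yml -- illustrative values only; the real metadata should
# be filled in by the gluster-ansible-infra maintainers.
galaxy_info:
  author: Gluster maintainers                       # fixes [703] author
  description: Configure firewall rules for GlusterFS nodes  # fixes [703] description
  company: example                                  # fixes [703] company (or drop the key)
  license: GPLv3                                    # fixes [703] license (assumed value)
  min_ansible_version: 2.5                          # illustrative
  platforms:                                        # fixes [701] missing platforms
    - name: EL
      versions:
        - 7
  galaxy_tags: []
dependencies: []
```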
Could not match :Build started : False
Logical operation result is FALSE
Skipping script :

# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

END OF POST BUILD TASK : 0