From ci at centos.org  Sat Jun  1 00:16:00 2019
From: ci at centos.org (ci at centos.org)
Date: Sat, 1 Jun 2019 00:16:00 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #381
In-Reply-To: <1316042004.1674.1559261627406.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1316042004.1674.1559261627406.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <427790444.1726.1559348160745.JavaMail.jenkins@jenkins.ci.centos.org>

See ------------------------------------------
[...truncated 37.39 KB...]
Total                                              55 MB/s | 143 MB  00:02
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
Importing GPG key 0x352C64E5:
 Userid     : "Fedora EPEL (7) "
 Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5
 Package    : epel-release-7-11.noarch (@extras)
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : mpfr-3.1.1-4.el7.x86_64                                    1/49
  Installing : apr-1.4.8-3.el7_4.1.x86_64                                 2/49
  Installing : apr-util-1.5.2-6.el7.x86_64                                3/49
  Installing : libmpc-1.0.1-3.el7.x86_64                                  4/49
  Installing : python-ipaddress-1.0.16-2.el7.noarch                       5/49
  Installing : python-six-1.9.0-2.el7.noarch                              6/49
  Installing : cpp-4.8.5-36.el7_6.2.x86_64                                7/49
  Installing : kernel-headers-3.10.0-957.12.2.el7.x86_64                  8/49
  Installing : glibc-headers-2.17-260.el7_6.5.x86_64                      9/49
  Installing : glibc-devel-2.17-260.el7_6.5.x86_64                       10/49
  Installing : gcc-4.8.5-36.el7_6.2.x86_64                               11/49
  Installing : elfutils-0.172-2.el7.x86_64                               12/49
  Installing : pakchois-0.4-10.el7.x86_64                                13/49
  Installing : unzip-6.0-19.el7.x86_64                                   14/49
  Installing : dwz-0.11-3.el7.x86_64                                     15/49
  Installing : bzip2-1.0.6-13.el7.x86_64                                 16/49
  Installing : usermode-1.111-5.el7.x86_64                               17/49
  Installing : patch-2.7.1-10.el7_5.x86_64                               18/49
  Installing : python-backports-1.0-8.el7.x86_64                         19/49
  Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch  20/49
  Installing : python-urllib3-1.10.2-5.el7.noarch                        21/49
  Installing : python-requests-2.6.0-1.el7_1.noarch                      22/49
  Installing : python-babel-0.9.6-8.el7.noarch                           23/49
  Installing : distribution-gpg-keys-1.30-1.el7.noarch                   24/49
  Installing : mock-core-configs-30.2-1.el7.noarch                       25/49
  Installing : libmodman-2.0.1-8.el7.x86_64                              26/49
  Installing : libproxy-0.4.11-11.el7.x86_64                             27/49
  Installing : python-markupsafe-0.11-10.el7.x86_64                      28/49
  Installing : python-jinja2-2.7.2-3.el7_6.noarch                        29/49
  Installing : python2-distro-1.2.0-3.el7.noarch                         30/49
  Installing : gdb-7.6.1-114.el7.x86_64                                  31/49
  Installing : perl-Thread-Queue-3.02-2.el7.noarch                       32/49
  Installing : perl-srpm-macros-1-8.el7.noarch                           33/49
  Installing : pigz-2.3.4-1.el7.x86_64                                   34/49
  Installing : python2-pyroute2-0.4.13-2.el7.noarch                      35/49
  Installing : golang-src-1.11.5-1.el7.noarch                            36/49
  Installing : nettle-2.7.1-8.el7.x86_64                                 37/49
  Installing : zip-3.0-11.el7.x86_64                                     38/49
  Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch              39/49
  Installing : mercurial-2.6.2-8.el7_4.x86_64                            40/49
  Installing : trousers-0.3.14-2.el7.x86_64                              41/49
  Installing : gnutls-3.3.29-9.el7_6.x86_64                              42/49
  Installing : neon-0.30.0-3.el7.x86_64                                  43/49
  Installing : subversion-libs-1.7.14-14.el7.x86_64                      44/49
  Installing : subversion-1.7.14-14.el7.x86_64                           45/49
  Installing : golang-1.11.5-1.el7.x86_64                                46/49
  Installing : golang-bin-1.11.5-1.el7.x86_64                            47/49
  Installing : rpm-build-4.11.3-35.el7.x86_64                            48/49
  Installing : mock-1.4.15-1.el7.noarch                                  49/49
  Verifying  : trousers-0.3.14-2.el7.x86_64                               1/49
  Verifying  : subversion-libs-1.7.14-14.el7.x86_64                       2/49
  Verifying  : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch   3/49
  Verifying  : rpm-build-4.11.3-35.el7.x86_64                             4/49
  Verifying  : mercurial-2.6.2-8.el7_4.x86_64                             5/49
  Verifying  : apr-1.4.8-3.el7_4.1.x86_64                                 6/49
  Verifying  : zip-3.0-11.el7.x86_64                                      7/49
  Verifying  : nettle-2.7.1-8.el7.x86_64                                  8/49
  Verifying  : gcc-4.8.5-36.el7_6.2.x86_64                                9/49
  Verifying  : golang-src-1.11.5-1.el7.noarch                            10/49
  Verifying  : python2-pyroute2-0.4.13-2.el7.noarch                      11/49
  Verifying  : pigz-2.3.4-1.el7.x86_64                                   12/49
  Verifying  : perl-srpm-macros-1-8.el7.noarch                           13/49
  Verifying  : golang-1.11.5-1.el7.x86_64                                14/49
  Verifying  : perl-Thread-Queue-3.02-2.el7.noarch                       15/49
  Verifying  : glibc-devel-2.17-260.el7_6.5.x86_64                       16/49
  Verifying  : python-jinja2-2.7.2-3.el7_6.noarch                        17/49
  Verifying  : golang-bin-1.11.5-1.el7.x86_64                            18/49
  Verifying  : gdb-7.6.1-114.el7.x86_64                                  19/49
  Verifying  : redhat-rpm-config-9.1.0-87.el7.centos.noarch              20/49
  Verifying  : python-urllib3-1.10.2-5.el7.noarch                        21/49
  Verifying  : gnutls-3.3.29-9.el7_6.x86_64                              22/49
  Verifying  : python2-distro-1.2.0-3.el7.noarch                         23/49
  Verifying  : python-markupsafe-0.11-10.el7.x86_64                      24/49
  Verifying  : libmodman-2.0.1-8.el7.x86_64                              25/49
  Verifying  : mpfr-3.1.1-4.el7.x86_64                                   26/49
  Verifying  : distribution-gpg-keys-1.30-1.el7.noarch                   27/49
  Verifying  : python-babel-0.9.6-8.el7.noarch                           28/49
  Verifying  : mock-1.4.15-1.el7.noarch                                  29/49
  Verifying  : apr-util-1.5.2-6.el7.x86_64                               30/49
  Verifying  : python-backports-1.0-8.el7.x86_64                         31/49
  Verifying  : patch-2.7.1-10.el7_5.x86_64                               32/49
  Verifying  : libmpc-1.0.1-3.el7.x86_64                                 33/49
  Verifying  : usermode-1.111-5.el7.x86_64                               34/49
  Verifying  : python-six-1.9.0-2.el7.noarch                             35/49
  Verifying  : libproxy-0.4.11-11.el7.x86_64                             36/49
  Verifying  : neon-0.30.0-3.el7.x86_64                                  37/49
  Verifying  : mock-core-configs-30.2-1.el7.noarch                       38/49
  Verifying  : python-requests-2.6.0-1.el7_1.noarch                      39/49
  Verifying  : bzip2-1.0.6-13.el7.x86_64                                 40/49
  Verifying  : subversion-1.7.14-14.el7.x86_64                           41/49
  Verifying  : python-ipaddress-1.0.16-2.el7.noarch                      42/49
  Verifying  : glibc-headers-2.17-260.el7_6.5.x86_64                     43/49
  Verifying  : dwz-0.11-3.el7.x86_64                                     44/49
  Verifying  : unzip-6.0-19.el7.x86_64                                   45/49
  Verifying  : cpp-4.8.5-36.el7_6.2.x86_64                               46/49
  Verifying  : pakchois-0.4-10.el7.x86_64                                47/49
  Verifying  : elfutils-0.172-2.el7.x86_64                               48/49
  Verifying  : kernel-headers-3.10.0-957.12.2.el7.x86_64                 49/49

Installed:
  golang.x86_64 0:1.11.5-1.el7          mock.noarch 0:1.4.15-1.el7
  rpm-build.x86_64 0:4.11.3-35.el7

Dependency Installed:
  apr.x86_64 0:1.4.8-3.el7_4.1
  apr-util.x86_64 0:1.5.2-6.el7
  bzip2.x86_64 0:1.0.6-13.el7
  cpp.x86_64 0:4.8.5-36.el7_6.2
  distribution-gpg-keys.noarch 0:1.30-1.el7
  dwz.x86_64 0:0.11-3.el7
  elfutils.x86_64 0:0.172-2.el7
  gcc.x86_64 0:4.8.5-36.el7_6.2
  gdb.x86_64 0:7.6.1-114.el7
  glibc-devel.x86_64 0:2.17-260.el7_6.5
  glibc-headers.x86_64 0:2.17-260.el7_6.5
  gnutls.x86_64 0:3.3.29-9.el7_6
  golang-bin.x86_64 0:1.11.5-1.el7
  golang-src.noarch 0:1.11.5-1.el7
  kernel-headers.x86_64 0:3.10.0-957.12.2.el7
  libmodman.x86_64 0:2.0.1-8.el7
  libmpc.x86_64 0:1.0.1-3.el7
  libproxy.x86_64 0:0.4.11-11.el7
  mercurial.x86_64 0:2.6.2-8.el7_4
  mock-core-configs.noarch 0:30.2-1.el7
  mpfr.x86_64 0:3.1.1-4.el7
  neon.x86_64 0:0.30.0-3.el7
  nettle.x86_64 0:2.7.1-8.el7
  pakchois.x86_64 0:0.4-10.el7
  patch.x86_64 0:2.7.1-10.el7_5
  perl-Thread-Queue.noarch 0:3.02-2.el7
  perl-srpm-macros.noarch 0:1-8.el7
  pigz.x86_64 0:2.3.4-1.el7
  python-babel.noarch 0:0.9.6-8.el7
  python-backports.x86_64 0:1.0-8.el7
  python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7
  python-ipaddress.noarch 0:1.0.16-2.el7
  python-jinja2.noarch 0:2.7.2-3.el7_6
  python-markupsafe.x86_64 0:0.11-10.el7
  python-requests.noarch 0:2.6.0-1.el7_1
  python-six.noarch 0:1.9.0-2.el7
  python-urllib3.noarch 0:1.10.2-5.el7
  python2-distro.noarch 0:1.2.0-3.el7
  python2-pyroute2.noarch 0:0.4.13-2.el7
  redhat-rpm-config.noarch 0:9.1.0-87.el7.centos
  subversion.x86_64 0:1.7.14-14.el7
  subversion-libs.x86_64 0:1.7.14-14.el7
  trousers.x86_64 0:0.3.14-2.el7
  unzip.x86_64 0:6.0-19.el7
  usermode.x86_64 0:1.111-5.el7
  zip.x86_64 0:3.0-11.el7

Complete!
LINUX
Installing dep.
Version: v0.5.0
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   605    0   605    0     0   1716      0 --:--:-- --:--:-- --:--:--  1718
100 8513k  100 8513k    0     0  11.0M      0 --:--:-- --:--:-- --:--:-- 11.0M
Installing gometalinter.
Version: 2.0.5
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   627    0   627    0     0   2041      0 --:--:-- --:--:-- --:--:--  2049
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 38.3M  100 38.3M    0     0  30.5M      0  0:00:01  0:00:01 --:--:-- 41.9M
Installing etcd.
Version: v3.3.9
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   153    0   153    0     0    586      0 --:--:-- --:--:-- --:--:--   588
  0     0    0   620    0     0   1831      0 --:--:-- --:--:-- --:--:--  1831
 72 10.7M   72 8017k    0     0  11.8M      0 --:--:-- --:--:-- --:--:-- 11.8M
100 10.7M  100 10.7M    0     0  15.3M      0 --:--:-- --:--:-- --:--:-- 74.4M
~/nightlyrpmDEhtks/go/src/github.com/gluster/glusterd2 ~
Installing vendored packages
Creating dist archive /root/nightlyrpmDEhtks/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz
Created dist archive /root/nightlyrpmDEhtks/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz
~ ~/nightlyrpmDEhtks
~
INFO: mock.py version 1.4.15 starting (python version = 2.7.5)...
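The "Creating dist archive ... -vendor.tar.xz" step above packs the vendored Go dependencies into an xz-compressed tarball before the SRPM is handed to mock. A minimal sketch of that packing step, with illustrative paths (a temp directory and a stub file stand in for the job's real `nightlyrpm*` scratch tree; this is not the job's actual script):

```shell
#!/bin/sh
# Sketch: bundle a vendor/ tree into a -vendor.tar.xz, as the nightly
# job's log shows. All paths below are hypothetical stand-ins.
set -eu

work=$(mktemp -d)
mkdir -p "$work/glusterd2/vendor"
echo 'package main' > "$work/glusterd2/vendor/stub.go"

# tar + xz (-J), mirroring "Creating dist archive ...-vendor.tar.xz"
tar -C "$work" -cJf "$work/glusterd2-vendor.tar.xz" glusterd2/vendor

# Confirm the vendored file made it into the archive
tar -tJf "$work/glusterd2-vendor.tar.xz" | grep -q 'vendor/stub.go' && echo OK

rm -rf "$work"
```

The archive is then unpacked inside the mock chroot during `%prep`, so the build needs no network access to fetch Go dependencies.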
Start: init plugins
INFO: selinux disabled
Finish: init plugins
Start: run
INFO: Start(/root/nightlyrpmDEhtks/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm)  Config(epel-7-x86_64)
Start: clean chroot
Finish: clean chroot
Start: chroot init
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled yum cache
Start: cleaning yum metadata
Finish: cleaning yum metadata
INFO: enabled HW Info plugin
Mock Version: 1.4.15
INFO: Mock Version: 1.4.15
Start: yum install
Finish: yum install
Start: creating root cache
Finish: creating root cache
Finish: chroot init
INFO: Installed packages:
Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: Outputting list of installed packages
Finish: Outputting list of installed packages
ERROR: Exception(/root/nightlyrpmDEhtks/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 25 seconds
INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64
INFO: Cleaning up build root ('cleanup_on_failure=True')
Start: clean chroot
Finish: clean chroot
ERROR: Command failed:
 # /usr/bin/systemd-nspawn -q -M 6f7cf6dc1bab43ad96cb71cc2fc349dd -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.4cZ6FS:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec

Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for :Building remotely : True
Logical operation result is TRUE
Running script  :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins1394525936145722512.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done 7d47af74
+---------+----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname | ip_address  | chassis | used_count | current_state | comment  | distro | rel  | centos_version | architecture | node_pool | console_port | flavor |
+---------+----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
|      66 | n2.pufty | 172.19.3.66 | pufty   |       3622 | Deployed      | 7d47af74 | None   | None |              7 | x86_64       |         1 |         2010 | None   |
+---------+----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0

From ci at centos.org  Sat Jun  1 00:37:01 2019
From: ci at centos.org (ci at centos.org)
Date: Sat, 1 Jun 2019 00:37:01 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #185
In-Reply-To: <1205065500.1677.1559263024971.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1205065500.1677.1559263024971.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <284262163.1729.1559349421913.JavaMail.jenkins@jenkins.ci.centos.org>

See ------------------------------------------
[...truncated 287.10 KB...]
TASK [container-engine/docker : check number of search domains] ****************
Saturday 01 June 2019  01:36:36 +0100 (0:00:00.128)       0:01:57.544 *********

TASK [container-engine/docker : check length of search domains] ****************
Saturday 01 June 2019  01:36:36 +0100 (0:00:00.127)       0:01:57.671 *********

TASK [container-engine/docker : check for minimum kernel version] **************
Saturday 01 June 2019  01:36:36 +0100 (0:00:00.129)       0:01:57.800 *********

TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] ***
Saturday 01 June 2019  01:36:36 +0100 (0:00:00.126)       0:01:57.927 *********

TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] ***
Saturday 01 June 2019  01:36:36 +0100 (0:00:00.249)       0:01:58.176 *********

TASK [container-engine/docker : ensure docker-ce repository public key is installed] ***
Saturday 01 June 2019  01:36:37 +0100 (0:00:00.621)       0:01:58.798 *********

TASK [container-engine/docker : ensure docker-ce repository is enabled] ********
Saturday 01 June 2019  01:36:37 +0100 (0:00:00.116)       0:01:58.915 *********

TASK [container-engine/docker : ensure docker-engine repository public key is installed] ***
Saturday 01 June 2019  01:36:37 +0100 (0:00:00.114)       0:01:59.029 *********

TASK [container-engine/docker : ensure docker-engine repository is enabled] ****
Saturday 01 June 2019  01:36:37 +0100 (0:00:00.138)       0:01:59.168 *********

TASK [container-engine/docker : Configure docker repository on Fedora] *********
Saturday 01 June 2019  01:36:37 +0100 (0:00:00.141)       0:01:59.310 *********

TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] ***
Saturday 01 June 2019  01:36:37 +0100 (0:00:00.125)       0:01:59.435 *********

TASK [container-engine/docker : Copy yum.conf for editing] *********************
Saturday 01 June 2019  01:36:38 +0100 (0:00:00.125)       0:01:59.560 *********

TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ******
Saturday 01 June 2019  01:36:38 +0100 (0:00:00.125)       0:01:59.686 *********

TASK [container-engine/docker : ensure docker packages are installed] **********
Saturday 01 June 2019  01:36:38 +0100 (0:00:00.131)       0:01:59.817 *********

TASK [container-engine/docker : Ensure docker packages are installed] **********
Saturday 01 June 2019  01:36:38 +0100 (0:00:00.164)       0:01:59.981 *********

TASK [container-engine/docker : get available packages on Ubuntu] **************
Saturday 01 June 2019  01:36:38 +0100 (0:00:00.150)       0:02:00.132 *********

TASK [container-engine/docker : show available packages on ubuntu] *************
Saturday 01 June 2019  01:36:38 +0100 (0:00:00.125)       0:02:00.257 *********

TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] ***
Saturday 01 June 2019  01:36:38 +0100 (0:00:00.123)       0:02:00.380 *********

TASK [container-engine/docker : ensure service is started if docker packages are already present] ***
Saturday 01 June 2019  01:36:38 +0100 (0:00:00.127)       0:02:00.507 *********
ok: [kube2]
ok: [kube3]
ok: [kube1]
 [WARNING]: flush_handlers task does not support when conditional

TASK [container-engine/docker : set fact for docker_version] *******************
Saturday 01 June 2019  01:36:39 +0100 (0:00:00.882)       0:02:01.390 *********
ok: [kube1]
ok: [kube2]
ok: [kube3]

TASK [container-engine/docker : check minimum docker version for docker_dns mode. You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] ***
Saturday 01 June 2019  01:36:40 +0100 (0:00:00.500)       0:02:01.891 *********

TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] ***
Saturday 01 June 2019  01:36:40 +0100 (0:00:00.134)       0:02:02.025 *********
changed: [kube1]
changed: [kube2]
changed: [kube3]

TASK [container-engine/docker : Write docker proxy drop-in] ********************
Saturday 01 June 2019  01:36:40 +0100 (0:00:00.442)       0:02:02.468 *********

TASK [container-engine/docker : get systemd version] ***************************
Saturday 01 June 2019  01:36:41 +0100 (0:00:00.152)       0:02:02.621 *********

TASK [container-engine/docker : Write docker.service systemd file] *************
Saturday 01 June 2019  01:36:41 +0100 (0:00:00.137)       0:02:02.758 *********

TASK [container-engine/docker : Write docker options systemd drop-in] **********
Saturday 01 June 2019  01:36:41 +0100 (0:00:00.151)       0:02:02.910 *********
changed: [kube1]
changed: [kube2]
changed: [kube3]

TASK [container-engine/docker : Write docker dns systemd drop-in] **************
Saturday 01 June 2019  01:36:42 +0100 (0:00:00.941)       0:02:03.851 *********
changed: [kube1]
changed: [kube2]
changed: [kube3]

TASK [container-engine/docker : Copy docker orphan clean up script to the node] ***
Saturday 01 June 2019  01:36:43 +0100 (0:00:01.007)       0:02:04.859 *********

TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] ***
Saturday 01 June 2019  01:36:43 +0100 (0:00:00.138)       0:02:04.997 *********

RUNNING HANDLER [container-engine/docker : restart docker] *********************
Saturday 01 June 2019  01:36:43 +0100 (0:00:00.106)       0:02:05.104 *********
changed: [kube1]
changed: [kube2]
changed: [kube3]

RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************
Saturday 01 June 2019  01:36:44 +0100 (0:00:00.446)       0:02:05.550 *********
changed: [kube1]
changed: [kube2]
changed: [kube3]

RUNNING HANDLER [container-engine/docker : Docker | reload docker.socket] ******
Saturday 01 June 2019  01:36:44 +0100 (0:00:00.533)       0:02:06.084 *********

RUNNING HANDLER [container-engine/docker : Docker | reload docker] *************
Saturday 01 June 2019  01:36:44 +0100 (0:00:00.125)       0:02:06.210 *********
changed: [kube1]
changed: [kube2]
changed: [kube3]

RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] ***
Saturday 01 June 2019  01:36:47 +0100 (0:00:03.060)       0:02:09.271 *********
Pausing for 10 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
[container-engine/docker : Docker | pause while Docker restarts]
Waiting for docker restart:
ok: [kube1]

RUNNING HANDLER [container-engine/docker : Docker | wait for docker] ***********
Saturday 01 June 2019  01:36:57 +0100 (0:00:10.106)       0:02:19.377 *********
changed: [kube3]
changed: [kube1]
changed: [kube2]

TASK [container-engine/docker : ensure docker service is started and enabled] ***
Saturday 01 June 2019  01:36:58 +0100 (0:00:00.599)       0:02:19.976 *********
ok: [kube1] => (item=docker)
ok: [kube2] => (item=docker)
ok: [kube3] => (item=docker)

TASK [download : include_tasks] ************************************************
Saturday 01 June 2019  01:36:59 +0100 (0:00:00.557)       0:02:20.534 *********
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3

TASK [download : Register docker images info] **********************************
Saturday 01 June 2019  01:36:59 +0100 (0:00:00.216)       0:02:20.750 *********
ok: [kube1]
ok: [kube2]
ok: [kube3]

TASK [download : container_download | Create dest directory for saved/loaded container images] ***
Saturday 01 June 2019  01:36:59 +0100 (0:00:00.550)       0:02:21.301 *********
changed: [kube1]
changed: [kube2]
changed: [kube3]

TASK [download : container_download | create local directory for saved/loaded container images] ***
Saturday 01 June 2019  01:37:00 +0100 (0:00:00.466)       0:02:21.768 *********

TASK [download :
Download items] ***********************************************
Saturday 01 June 2019  01:37:00 +0100 (0:00:00.065)       0:02:21.833 *********
fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n  ^ here\n"}
fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n  ^ here\n"}
fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n  ^ here\n"}
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
[...the identical 'delegate_to' fatal above repeats for each remaining download item on kube1, kube2 and kube3 (failed=10 per host)...]

PLAY RECAP *********************************************************************
kube1                      : ok=109  changed=22   unreachable=0    failed=10   skipped=116  rescued=0    ignored=0
kube2                      : ok=96   changed=22   unreachable=0    failed=10   skipped=111  rescued=0    ignored=0
kube3                      : ok=94   changed=22   unreachable=0    failed=10   skipped=113  rescued=0    ignored=0

Saturday 01 June 2019  01:37:01 +0100 (0:00:01.343)       0:02:23.176 *********
===============================================================================
Install packages ------------------------------------------------------- 25.27s
Wait for host to be available ------------------------------------------ 16.26s
Extend root VG --------------------------------------------------------- 13.51s
gather facts from all instances ---------------------------------------- 10.28s
container-engine/docker : Docker | pause while Docker restarts --------- 10.11s
Persist loaded modules -------------------------------------------------- 3.43s
container-engine/docker : Docker | reload docker ------------------------ 3.06s
kubernetes/preinstall : Create kubernetes directories ------------------- 1.96s
Load required kernel modules
-------------------------------------------- 1.69s bootstrap-os : Gather nodes hostnames ----------------------------------- 1.57s Extend the root LV and FS to occupy remaining space --------------------- 1.48s download : Download items ----------------------------------------------- 1.34s bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 1.34s Gathering Facts --------------------------------------------------------- 1.28s kubernetes/preinstall : Create cni directories -------------------------- 1.20s bootstrap-os : Disable fastestmirror plugin ----------------------------- 1.19s bootstrap-os : Create remote_tmp for it is used by another module ------- 1.14s download : Download items ----------------------------------------------- 1.12s kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 1.09s download : Sync container ----------------------------------------------- 1.06s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Sat Jun 1 01:17:48 2019 From: ci at centos.org (ci at centos.org) Date: Sat, 1 Jun 2019 01:17:48 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #210 In-Reply-To: <450516586.1678.1559265404127.JavaMail.jenkins@jenkins.ci.centos.org> References: <450516586.1678.1559265404127.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1282733491.1732.1559351868851.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 56.18 KB...] changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. 
Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... [701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], 
u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── create └── prepare --> Scenario: 'default' --> Action: 'create' [DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg. PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] 
******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', 
u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Sun Jun 2 00:15:55 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 2 Jun 2019 00:15:55 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #382 In-Reply-To: <427790444.1726.1559348160745.JavaMail.jenkins@jenkins.ci.centos.org> References: <427790444.1726.1559348160745.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <84803452.1778.1559434555672.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.41 KB...] Total 69 MB/s | 143 MB 00:02 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7_6.2.x86_64 7/49 Installing : kernel-headers-3.10.0-957.12.2.el7.x86_64 8/49 Installing : glibc-headers-2.17-260.el7_6.5.x86_64 9/49 Installing : glibc-devel-2.17-260.el7_6.5.x86_64 10/49 Installing : gcc-4.8.5-36.el7_6.2.x86_64 11/49 Installing : elfutils-0.172-2.el7.x86_64 12/49 Installing : pakchois-0.4-10.el7.x86_64 13/49 Installing : unzip-6.0-19.el7.x86_64 14/49 Installing : dwz-0.11-3.el7.x86_64 15/49 Installing : 
bzip2-1.0.6-13.el7.x86_64 16/49 Installing : usermode-1.111-5.el7.x86_64 17/49 Installing : patch-2.7.1-10.el7_5.x86_64 18/49 Installing : python-backports-1.0-8.el7.x86_64 19/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 20/49 Installing : python-urllib3-1.10.2-5.el7.noarch 21/49 Installing : python-requests-2.6.0-1.el7_1.noarch 22/49 Installing : python-babel-0.9.6-8.el7.noarch 23/49 Installing : distribution-gpg-keys-1.30-1.el7.noarch 24/49 Installing : mock-core-configs-30.2-1.el7.noarch 25/49 Installing : libmodman-2.0.1-8.el7.x86_64 26/49 Installing : libproxy-0.4.11-11.el7.x86_64 27/49 Installing : python-markupsafe-0.11-10.el7.x86_64 28/49 Installing : python-jinja2-2.7.2-3.el7_6.noarch 29/49 Installing : python2-distro-1.2.0-3.el7.noarch 30/49 Installing : gdb-7.6.1-114.el7.x86_64 31/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/49 Installing : perl-srpm-macros-1-8.el7.noarch 33/49 Installing : pigz-2.3.4-1.el7.x86_64 34/49 Installing : python2-pyroute2-0.4.13-2.el7.noarch 35/49 Installing : golang-src-1.11.5-1.el7.noarch 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-9.el7_6.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.15-1.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 2/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 3/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 4/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 5/49 
Verifying : apr-1.4.8-3.el7_4.1.x86_64 6/49 Verifying : zip-3.0-11.el7.x86_64 7/49 Verifying : nettle-2.7.1-8.el7.x86_64 8/49 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 9/49 Verifying : golang-src-1.11.5-1.el7.noarch 10/49 Verifying : python2-pyroute2-0.4.13-2.el7.noarch 11/49 Verifying : pigz-2.3.4-1.el7.x86_64 12/49 Verifying : perl-srpm-macros-1-8.el7.noarch 13/49 Verifying : golang-1.11.5-1.el7.x86_64 14/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 15/49 Verifying : glibc-devel-2.17-260.el7_6.5.x86_64 16/49 Verifying : python-jinja2-2.7.2-3.el7_6.noarch 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : gdb-7.6.1-114.el7.x86_64 19/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 20/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 21/49 Verifying : gnutls-3.3.29-9.el7_6.x86_64 22/49 Verifying : python2-distro-1.2.0-3.el7.noarch 23/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 24/49 Verifying : libmodman-2.0.1-8.el7.x86_64 25/49 Verifying : mpfr-3.1.1-4.el7.x86_64 26/49 Verifying : distribution-gpg-keys-1.30-1.el7.noarch 27/49 Verifying : python-babel-0.9.6-8.el7.noarch 28/49 Verifying : mock-1.4.15-1.el7.noarch 29/49 Verifying : apr-util-1.5.2-6.el7.x86_64 30/49 Verifying : python-backports-1.0-8.el7.x86_64 31/49 Verifying : patch-2.7.1-10.el7_5.x86_64 32/49 Verifying : libmpc-1.0.1-3.el7.x86_64 33/49 Verifying : usermode-1.111-5.el7.x86_64 34/49 Verifying : python-six-1.9.0-2.el7.noarch 35/49 Verifying : libproxy-0.4.11-11.el7.x86_64 36/49 Verifying : neon-0.30.0-3.el7.x86_64 37/49 Verifying : mock-core-configs-30.2-1.el7.noarch 38/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 39/49 Verifying : bzip2-1.0.6-13.el7.x86_64 40/49 Verifying : subversion-1.7.14-14.el7.x86_64 41/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 42/49 Verifying : glibc-headers-2.17-260.el7_6.5.x86_64 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 46/49 Verifying 
: pakchois-0.4-10.el7.x86_64 47/49 Verifying : elfutils-0.172-2.el7.x86_64 48/49 Verifying : kernel-headers-3.10.0-957.12.2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.15-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.30-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5 glibc-headers.x86_64 0:2.17-260.el7_6.5 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.12.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-3.el7_6 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-3.el7 python2-pyroute2.noarch 0:0.4.13-2.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1785 0 --:--:-- --:--:-- --:--:-- 1795 100 8513k 100 8513k 0 0 11.9M 0 --:--:-- --:--:-- --:--:-- 11.9M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 1621 0 --:--:-- --:--:-- --:--:-- 1620 100 38.3M 100 38.3M 0 0 41.1M 0 --:--:-- --:--:-- --:--:-- 41.1M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 548 0 --:--:-- --:--:-- --:--:-- 550 0 0 0 620 0 0 1643 0 --:--:-- --:--:-- --:--:-- 1643 100 10.7M 100 10.7M 0 0 15.7M 0 --:--:-- --:--:-- --:--:-- 15.7M ~/nightlyrpmneuAtU/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmneuAtU/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmneuAtU/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmneuAtU ~ INFO: mock.py version 1.4.15 starting (python version = 2.7.5)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmneuAtU/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.15 INFO: Mock Version: 1.4.15 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmneuAtU/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 24 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M a5b61a1cb9a74e219449c604715546dc -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.uSd3vL:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins518155579958869711.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done a8cd216d
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname   | ip_address  | chassis | used_count | current_state | comment  | distro | rel  | centos_version | architecture | node_pool | console_port | flavor |
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 190     | n63.crusty | 172.19.2.63 | crusty  | 3631       | Deployed      | a8cd216d | None   | None | 7              | x86_64       | 1         | 2620         | None   |
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Sun Jun 2 00:36:56 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 2 Jun 2019 00:36:56 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #186 In-Reply-To: <284262163.1729.1559349421913.JavaMail.jenkins@jenkins.ci.centos.org> References: <284262163.1729.1559349421913.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1093230059.1780.1559435816681.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 287.14 KB...] 
TASK [container-engine/docker : check number of search domains] ****************
Sunday 02 June 2019  01:36:30 +0100 (0:00:00.128)       0:01:56.714 ***********
TASK [container-engine/docker : check length of search domains] ****************
Sunday 02 June 2019  01:36:30 +0100 (0:00:00.129)       0:01:56.844 ***********
TASK [container-engine/docker : check for minimum kernel version] **************
Sunday 02 June 2019  01:36:31 +0100 (0:00:00.125)       0:01:56.970 ***********
TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] ***
Sunday 02 June 2019  01:36:31 +0100 (0:00:00.127)       0:01:57.098 ***********
TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] ***
Sunday 02 June 2019  01:36:31 +0100 (0:00:00.250)       0:01:57.348 ***********
TASK [container-engine/docker : ensure docker-ce repository public key is installed] ***
Sunday 02 June 2019  01:36:32 +0100 (0:00:00.622)       0:01:57.970 ***********
TASK [container-engine/docker : ensure docker-ce repository is enabled] ********
Sunday 02 June 2019  01:36:32 +0100 (0:00:00.110)       0:01:58.081 ***********
TASK [container-engine/docker : ensure docker-engine repository public key is installed] ***
Sunday 02 June 2019  01:36:32 +0100 (0:00:00.110)       0:01:58.191 ***********
TASK [container-engine/docker : ensure docker-engine repository is enabled] ****
Sunday 02 June 2019  01:36:32 +0100 (0:00:00.136)       0:01:58.327 ***********
TASK [container-engine/docker : Configure docker repository on Fedora] *********
Sunday 02 June 2019  01:36:32 +0100 (0:00:00.135)       0:01:58.463 ***********
TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] ***
Sunday 02 June 2019  01:36:32 +0100 (0:00:00.125)       0:01:58.588 ***********
TASK [container-engine/docker : Copy yum.conf for editing] *********************
Sunday 02 June 2019  01:36:32 +0100 (0:00:00.130)       0:01:58.718 ***********
TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ******
Sunday 02 June 2019  01:36:32 +0100 (0:00:00.129)       0:01:58.848 ***********
TASK [container-engine/docker : ensure docker packages are installed] **********
Sunday 02 June 2019  01:36:33 +0100 (0:00:00.132)       0:01:58.981 ***********
TASK [container-engine/docker : Ensure docker packages are installed] **********
Sunday 02 June 2019  01:36:33 +0100 (0:00:00.159)       0:01:59.140 ***********
TASK [container-engine/docker : get available packages on Ubuntu] **************
Sunday 02 June 2019  01:36:33 +0100 (0:00:00.148)       0:01:59.288 ***********
TASK [container-engine/docker : show available packages on ubuntu] *************
Sunday 02 June 2019  01:36:33 +0100 (0:00:00.123)       0:01:59.411 ***********
TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] ***
Sunday 02 June 2019  01:36:33 +0100 (0:00:00.123)       0:01:59.535 ***********
TASK [container-engine/docker : ensure service is started if docker packages are already present] ***
Sunday 02 June 2019  01:36:33 +0100 (0:00:00.124)       0:01:59.659 ***********
ok: [kube1]
ok: [kube3]
ok: [kube2]
 [WARNING]: flush_handlers task does not support when conditional
TASK [container-engine/docker : set fact for docker_version] *******************
Sunday 02 June 2019  01:36:34 +0100 (0:00:00.878)       0:02:00.538 ***********
ok: [kube1]
ok: [kube2]
ok: [kube3]
TASK [container-engine/docker : check minimum docker version for docker_dns mode. You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] ***
Sunday 02 June 2019  01:36:35 +0100 (0:00:00.506)       0:02:01.045 ***********
TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] ***
Sunday 02 June 2019  01:36:35 +0100 (0:00:00.126)       0:02:01.172 ***********
changed: [kube1]
changed: [kube2]
changed: [kube3]
TASK [container-engine/docker : Write docker proxy drop-in] ********************
Sunday 02 June 2019  01:36:35 +0100 (0:00:00.442)       0:02:01.615 ***********
TASK [container-engine/docker : get systemd version] ***************************
Sunday 02 June 2019  01:36:35 +0100 (0:00:00.142)       0:02:01.757 ***********
TASK [container-engine/docker : Write docker.service systemd file] *************
Sunday 02 June 2019  01:36:35 +0100 (0:00:00.130)       0:02:01.888 ***********
TASK [container-engine/docker : Write docker options systemd drop-in] **********
Sunday 02 June 2019  01:36:36 +0100 (0:00:00.143)       0:02:02.032 ***********
changed: [kube1]
changed: [kube2]
changed: [kube3]
TASK [container-engine/docker : Write docker dns systemd drop-in] **************
Sunday 02 June 2019  01:36:37 +0100 (0:00:00.956)       0:02:02.988 ***********
changed: [kube1]
changed: [kube2]
changed: [kube3]
TASK [container-engine/docker : Copy docker orphan clean up script to the node] ***
Sunday 02 June 2019  01:36:37 +0100 (0:00:00.882)       0:02:03.871 ***********
TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] ***
Sunday 02 June 2019  01:36:38 +0100 (0:00:00.148)       0:02:04.019 ***********
RUNNING HANDLER [container-engine/docker : restart docker] *********************
Sunday 02 June 2019  01:36:38 +0100 (0:00:00.115)       0:02:04.134 ***********
changed: [kube1]
changed: [kube2]
changed: [kube3]
RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************
Sunday 02 June 2019  01:36:38 +0100 (0:00:00.537)       0:02:04.672 ***********
changed: [kube1]
changed: [kube2]
changed: [kube3]
RUNNING HANDLER [container-engine/docker : Docker | reload docker.socket] ******
Sunday 02 June 2019  01:36:39 +0100 (0:00:00.512)       0:02:05.184 ***********
RUNNING HANDLER [container-engine/docker : Docker | reload docker] *************
Sunday 02 June 2019  01:36:39 +0100 (0:00:00.145)       0:02:05.330 ***********
changed: [kube2]
changed: [kube1]
changed: [kube3]
RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] ***
Sunday 02 June 2019  01:36:42 +0100 (0:00:03.157)       0:02:08.488 ***********
Pausing for 10 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
[container-engine/docker : Docker | pause while Docker restarts]
Waiting for docker restart:
ok: [kube1]
RUNNING HANDLER [container-engine/docker : Docker | wait for docker] ***********
Sunday 02 June 2019  01:36:52 +0100 (0:00:10.092)       0:02:18.580 ***********
changed: [kube1]
changed: [kube2]
changed: [kube3]
TASK [container-engine/docker : ensure docker service is started and enabled] ***
Sunday 02 June 2019  01:36:53 +0100 (0:00:00.586)       0:02:19.167 ***********
ok: [kube1] => (item=docker)
ok: [kube2] => (item=docker)
ok: [kube3] => (item=docker)
TASK [download : include_tasks] ************************************************
Sunday 02 June 2019  01:36:53 +0100 (0:00:00.567)       0:02:19.734 ***********
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3
TASK [download : Register docker images info] **********************************
Sunday 02 June 2019  01:36:54 +0100 (0:00:00.211)       0:02:19.945 ***********
ok: [kube1]
ok: [kube2]
ok: [kube3]
TASK [download : container_download | Create dest directory for saved/loaded container images] ***
Sunday 02 June 2019  01:36:54 +0100 (0:00:00.579)       0:02:20.525 ***********
changed: [kube1]
changed: [kube2]
changed: [kube3]
TASK [download : container_download | create local directory for saved/loaded container images] ***
Sunday 02 June 2019  01:36:55 +0100 (0:00:00.458)       0:02:20.983 ***********
TASK [download : 
Download items] ***********************************************
Sunday 02 June 2019  01:36:55 +0100 (0:00:00.060)       0:02:21.044 ***********
fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3

PLAY RECAP *********************************************************************
kube1                      : ok=109  changed=22   unreachable=0    failed=10   skipped=116  rescued=0    ignored=0
kube2                      : ok=96   changed=22   unreachable=0    failed=10   skipped=111  rescued=0    ignored=0
kube3                      : ok=94   changed=22   unreachable=0    failed=10   skipped=113  rescued=0    ignored=0

Sunday 02 June 2019  01:36:56 +0100 (0:00:01.314)       0:02:22.359 ***********
===============================================================================
Install packages ------------------------------------------------------- 23.66s
Wait for host to be available ------------------------------------------ 16.27s
Extend root VG --------------------------------------------------------- 14.20s
gather facts from all instances ---------------------------------------- 11.27s
container-engine/docker : Docker | pause while Docker restarts --------- 10.09s
Persist loaded modules -------------------------------------------------- 3.39s
container-engine/docker : Docker | reload docker ------------------------ 3.16s
kubernetes/preinstall : Create kubernetes directories ------------------- 2.10s
Load required kernel modules -------------------------------------------- 1.66s
bootstrap-os : Gather nodes hostnames ----------------------------------- 1.63s
Extend the root LV and FS to occupy remaining space --------------------- 1.52s
bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 1.41s
download : Download items ----------------------------------------------- 1.31s
Gathering Facts --------------------------------------------------------- 1.28s
kubernetes/preinstall : Create cni directories -------------------------- 1.18s
bootstrap-os : Create remote_tmp for it is used by another module ------- 1.09s
bootstrap-os : Disable fastestmirror plugin ----------------------------- 1.06s
kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 1.04s
bootstrap-os : check if atomic host ------------------------------------- 0.98s
bootstrap-os : Check presence of fastestmirror.conf --------------------- 0.96s
==> kube3: An error occurred. The error will be shown after all tasks complete.
An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below.
An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again:
Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again.
Build step 'Execute shell' marked build as failure
Performing Post build task...
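Every "Download items" iteration above fails with the same Ansible error: a task keyword attached directly to a dynamic include is rejected as not valid for a TaskInclude. A minimal sketch of the offending shape and one possible fix via the `apply` keyword (supported on `include_tasks` since Ansible 2.7); the `localhost` target is a placeholder assumption, and this is not necessarily the patch Kubespray actually shipped:

```yaml
# Failing shape (as reported in the log): 'delegate_to' set directly on a
# dynamic include is rejected by newer Ansible releases.
#
# - name: container_download | Make download decision if pull is required by tag or sha256
#   include_tasks: download_container.yml
#   delegate_to: localhost          # <- "not a valid attribute for a TaskInclude"
#
# Possible fix (assumption): route the keyword to the included tasks via
# 'apply', so it is applied inside download_container.yml instead.
- name: container_download | Make download decision if pull is required by tag or sha256
  include_tasks: download_container.yml
  apply:
    delegate_to: localhost          # placeholder target for illustration
```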
Could not match :Build started  : False
Logical operation result is FALSE
Skipping script  : 
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0
From ci at centos.org  Sun Jun  2 01:16:37 2019
From: ci at centos.org (ci at centos.org)
Date: Sun, 2 Jun 2019 01:16:37 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #211
In-Reply-To: <1282733491.1732.1559351868851.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1282733491.1732.1559351868851.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1977407045.1782.1559438197273.JavaMail.jenkins@jenkins.ci.centos.org>

See 
------------------------------------------
[...truncated 56.45 KB...]
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Wait for instance(s) creation to complete] *******************************
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=6    changed=4    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0

--> Scenario: 'default'
--> Action: 'prepare'

PLAY [Prepare] *****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

PLAY RECAP *********************************************************************
instance                   : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: author
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: description
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: company
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: license
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
An error occurred during the test sequence action: 'lint'. Cleaning up.
--> Scenario: 'default'
--> Action: 'destroy'

PLAY [Destroy] *****************************************************************

TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Delete docker network(s)] ************************************************

PLAY RECAP *********************************************************************
localhost                  : ok=2    changed=2    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0

./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── create
    └── prepare

--> Scenario: 'default'
--> Action: 'create'
[DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.

PLAY [Create] ******************************************************************

TASK [Log into a Docker registry] **********************************************
skipping: [localhost] => (item=None)

TASK [Create Dockerfiles from image names] *************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Discover local Docker images] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Build an Ansible compatible image] ***************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Create docker network(s)] ************************************************

TASK [Determine the CMD directives] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Create molecule instance(s)] *********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) creation to complete] *******************************
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=6    changed=3    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0

--> Scenario: 'default'
--> Action: 'prepare'

PLAY [Prepare] *****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

PLAY RECAP *********************************************************************
instance                   : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: author
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: description
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: company
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: license
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
An error occurred during the test sequence action: 'lint'. Cleaning up.
--> Scenario: 'default'
--> Action: 'destroy'

PLAY [Destroy] *****************************************************************

TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Delete docker network(s)] ************************************************

PLAY RECAP *********************************************************************
localhost                  : ok=2    changed=2    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0

Build step 'Execute shell' marked build as failure
Performing Post build task...
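The ansible-lint failures above (rules 701 and 703) are all about the role's `meta/main.yml` still carrying the boilerplate `ansible-galaxy init` placeholders. A sketch of what a filled-in file could look like; every value below is an assumed placeholder, not taken from the actual gluster-ansible-infra repository:

```yaml
# Hypothetical meta/main.yml for the firewall_config role with the
# default galaxy_info fields replaced, which is what rules 701/703 ask for.
galaxy_info:
  author: Gluster maintainers            # assumed value (rule 703: author)
  description: Configure firewalld for GlusterFS   # assumed (rule 703: description)
  company: Gluster Community             # assumed value (rule 703: company)
  license: GPLv3                         # assumed value (rule 703: license)
  min_ansible_version: 2.5
  platforms:                             # rule 701: role info should contain platforms
    - name: EL
      versions:
        - 7
  galaxy_tags:
    - gluster
dependencies: []
```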
Could not match :Build started : False
Logical operation result is FALSE
Skipping script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0
From ci at centos.org Mon Jun 3 00:15:59 2019
From: ci at centos.org (ci at centos.org)
Date: Mon, 3 Jun 2019 00:15:59 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #383
In-Reply-To: <84803452.1778.1559434555672.JavaMail.jenkins@jenkins.ci.centos.org>
References: <84803452.1778.1559434555672.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1620496074.1840.1559520959293.JavaMail.jenkins@jenkins.ci.centos.org>
See ------------------------------------------
[...truncated 37.37 KB...]
Total 66 MB/s | 143 MB 00:02
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
Importing GPG key 0x352C64E5:
Userid : "Fedora EPEL (7) "
Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5
Package : epel-release-7-11.noarch (@extras)
From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7_6.2.x86_64 7/49 Installing : kernel-headers-3.10.0-957.12.2.el7.x86_64 8/49 Installing : glibc-headers-2.17-260.el7_6.5.x86_64 9/49 Installing : glibc-devel-2.17-260.el7_6.5.x86_64 10/49 Installing : gcc-4.8.5-36.el7_6.2.x86_64 11/49 Installing : elfutils-0.172-2.el7.x86_64 12/49 Installing : pakchois-0.4-10.el7.x86_64 13/49 Installing : unzip-6.0-19.el7.x86_64 14/49 Installing : dwz-0.11-3.el7.x86_64 15/49 Installing : 
bzip2-1.0.6-13.el7.x86_64 16/49 Installing : usermode-1.111-5.el7.x86_64 17/49 Installing : patch-2.7.1-10.el7_5.x86_64 18/49 Installing : python-backports-1.0-8.el7.x86_64 19/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 20/49 Installing : python-urllib3-1.10.2-5.el7.noarch 21/49 Installing : python-requests-2.6.0-1.el7_1.noarch 22/49 Installing : python-babel-0.9.6-8.el7.noarch 23/49 Installing : distribution-gpg-keys-1.30-1.el7.noarch 24/49 Installing : mock-core-configs-30.2-1.el7.noarch 25/49 Installing : libmodman-2.0.1-8.el7.x86_64 26/49 Installing : libproxy-0.4.11-11.el7.x86_64 27/49 Installing : python-markupsafe-0.11-10.el7.x86_64 28/49 Installing : python-jinja2-2.7.2-3.el7_6.noarch 29/49 Installing : python2-distro-1.2.0-3.el7.noarch 30/49 Installing : gdb-7.6.1-114.el7.x86_64 31/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/49 Installing : perl-srpm-macros-1-8.el7.noarch 33/49 Installing : pigz-2.3.4-1.el7.x86_64 34/49 Installing : python2-pyroute2-0.4.13-2.el7.noarch 35/49 Installing : golang-src-1.11.5-1.el7.noarch 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-9.el7_6.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.15-1.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 2/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 3/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 4/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 5/49 
Verifying : apr-1.4.8-3.el7_4.1.x86_64 6/49 Verifying : zip-3.0-11.el7.x86_64 7/49 Verifying : nettle-2.7.1-8.el7.x86_64 8/49 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 9/49 Verifying : golang-src-1.11.5-1.el7.noarch 10/49 Verifying : python2-pyroute2-0.4.13-2.el7.noarch 11/49 Verifying : pigz-2.3.4-1.el7.x86_64 12/49 Verifying : perl-srpm-macros-1-8.el7.noarch 13/49 Verifying : golang-1.11.5-1.el7.x86_64 14/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 15/49 Verifying : glibc-devel-2.17-260.el7_6.5.x86_64 16/49 Verifying : python-jinja2-2.7.2-3.el7_6.noarch 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : gdb-7.6.1-114.el7.x86_64 19/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 20/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 21/49 Verifying : gnutls-3.3.29-9.el7_6.x86_64 22/49 Verifying : python2-distro-1.2.0-3.el7.noarch 23/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 24/49 Verifying : libmodman-2.0.1-8.el7.x86_64 25/49 Verifying : mpfr-3.1.1-4.el7.x86_64 26/49 Verifying : distribution-gpg-keys-1.30-1.el7.noarch 27/49 Verifying : python-babel-0.9.6-8.el7.noarch 28/49 Verifying : mock-1.4.15-1.el7.noarch 29/49 Verifying : apr-util-1.5.2-6.el7.x86_64 30/49 Verifying : python-backports-1.0-8.el7.x86_64 31/49 Verifying : patch-2.7.1-10.el7_5.x86_64 32/49 Verifying : libmpc-1.0.1-3.el7.x86_64 33/49 Verifying : usermode-1.111-5.el7.x86_64 34/49 Verifying : python-six-1.9.0-2.el7.noarch 35/49 Verifying : libproxy-0.4.11-11.el7.x86_64 36/49 Verifying : neon-0.30.0-3.el7.x86_64 37/49 Verifying : mock-core-configs-30.2-1.el7.noarch 38/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 39/49 Verifying : bzip2-1.0.6-13.el7.x86_64 40/49 Verifying : subversion-1.7.14-14.el7.x86_64 41/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 42/49 Verifying : glibc-headers-2.17-260.el7_6.5.x86_64 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 46/49 Verifying 
: pakchois-0.4-10.el7.x86_64 47/49 Verifying : elfutils-0.172-2.el7.x86_64 48/49 Verifying : kernel-headers-3.10.0-957.12.2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.15-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.30-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5 glibc-headers.x86_64 0:2.17-260.el7_6.5 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.12.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-3.el7_6 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-3.el7 python2-pyroute2.noarch 0:0.4.13-2.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0
[...curl download progress omitted...]
Installing gometalinter.
Version: 2.0.5
[...curl download progress omitted...]
Installing etcd.
Version: v3.3.9
[...curl download progress omitted...]
~/nightlyrpmrI8f1e/go/src/github.com/gluster/glusterd2 ~
Installing vendored packages
Creating dist archive /root/nightlyrpmrI8f1e/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz
Created dist archive /root/nightlyrpmrI8f1e/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz
~ ~/nightlyrpmrI8f1e ~
INFO: mock.py version 1.4.15 starting (python version = 2.7.5)...
Start: init plugins
INFO: selinux disabled
Finish: init plugins
Start: run
INFO: Start(/root/nightlyrpmrI8f1e/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64)
Start: clean chroot
Finish: clean chroot
Start: chroot init
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled yum cache
Start: cleaning yum metadata
Finish: cleaning yum metadata
INFO: enabled HW Info plugin
Mock Version: 1.4.15
INFO: Mock Version: 1.4.15
Start: yum install
Finish: yum install
Start: creating root cache
Finish: creating root cache
Finish: chroot init
INFO: Installed packages:
Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: Outputting list of installed packages
Finish: Outputting list of installed packages
ERROR: Exception(/root/nightlyrpmrI8f1e/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 24 seconds
INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64
INFO: Cleaning up build root ('cleanup_on_failure=True')
Start: clean chroot
Finish: clean chroot
ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M e7e1a6c217ee499187675f5d7e7863b4 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.5qTar4:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec
Build step 'Execute shell' marked build as failure
Performing Post build task...
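The mock failure above only reports that rpmbuild exited non-zero inside the chroot; the actual compile error lands in build.log under the result directory. A sketch for reproducing the chroot build locally, assuming mock is installed and the SRPM path from the log (or your own rebuilt SRPM) is available:

```shell
#!/bin/sh
# Rebuild the nightly SRPM in the same epel-7-x86_64 chroot the CI used.
# The SRPM path is taken from the log above; adjust it to your checkout.
SRPM=/root/nightlyrpmrI8f1e/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
CFG=epel-7-x86_64
RESULTDIR=/tmp/glusterd2-mock-results

# Show the command that will be run.
echo "mock -r $CFG --rebuild $SRPM --resultdir $RESULTDIR"

# Only attempt the rebuild when mock is actually available on this host.
if command -v mock >/dev/null 2>&1; then
    mock -r "$CFG" --rebuild "$SRPM" --resultdir "$RESULTDIR"
    # On failure, inspect $RESULTDIR/build.log for the rpmbuild error.
fi
```

The `--resultdir` location is a local choice; without it, mock writes results under /var/lib/mock/$CFG/result.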
Match found for :Building remotely : True
Logical operation result is TRUE
Running script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins7259798539477292864.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done e4e1b0ec
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname   | ip_address  | chassis | used_count | current_state | comment  | distro | rel  | centos_version | architecture | node_pool | console_port | flavor |
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 171     | n44.crusty | 172.19.2.44 | crusty  | 3634       | Deployed      | e4e1b0ec | None   | None | 7              | x86_64       | 1         | 2430         | None   |
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0
From ci at centos.org Mon Jun 3 00:40:57 2019
From: ci at centos.org (ci at centos.org)
Date: Mon, 3 Jun 2019 00:40:57 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #187
In-Reply-To: <1093230059.1780.1559435816681.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1093230059.1780.1559435816681.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <77435703.1842.1559522457672.JavaMail.jenkins@jenkins.ci.centos.org>
See ------------------------------------------
[...truncated 287.19 KB...]
TASK [container-engine/docker : check number of search domains] ****************
Monday 03 June 2019 01:40:15 +0100 (0:00:00.296) 0:03:00.540 ***********
TASK [container-engine/docker : check length of search domains] ****************
Monday 03 June 2019 01:40:15 +0100 (0:00:00.288) 0:03:00.828 ***********
TASK [container-engine/docker : check for minimum kernel version] **************
Monday 03 June 2019 01:40:16 +0100 (0:00:00.290) 0:03:01.119 ***********
TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] ***
Monday 03 June 2019 01:40:16 +0100 (0:00:00.275) 0:03:01.395 ***********
TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] ***
Monday 03 June 2019 01:40:16 +0100 (0:00:00.642) 0:03:02.037 ***********
TASK [container-engine/docker : ensure docker-ce repository public key is installed] ***
Monday 03 June 2019 01:40:18 +0100 (0:00:01.332) 0:03:03.370 ***********
TASK [container-engine/docker : ensure docker-ce repository is enabled] ********
Monday 03 June 2019 01:40:18 +0100 (0:00:00.266) 0:03:03.636 ***********
TASK [container-engine/docker : ensure docker-engine repository public key is installed] ***
Monday 03 June 2019 01:40:18 +0100 (0:00:00.254) 0:03:03.890 ***********
TASK [container-engine/docker : ensure docker-engine repository is enabled] ****
Monday 03 June 2019 01:40:19 +0100 (0:00:00.301) 0:03:04.192 ***********
TASK [container-engine/docker : Configure docker repository on Fedora] *********
Monday 03 June 2019 01:40:19 +0100 (0:00:00.299) 0:03:04.492 ***********
TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] ***
Monday 03 June 2019 01:40:19 +0100 (0:00:00.278) 0:03:04.771 ***********
TASK [container-engine/docker : Copy yum.conf for editing] *********************
Monday 03 June 2019 01:40:19 +0100 (0:00:00.283) 0:03:05.055 ***********
TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ******
Monday 03 June 2019 01:40:20 +0100 (0:00:00.279) 0:03:05.334 ***********
TASK [container-engine/docker : ensure docker packages are installed] **********
Monday 03 June 2019 01:40:20 +0100 (0:00:00.289) 0:03:05.624 ***********
TASK [container-engine/docker : Ensure docker packages are installed] **********
Monday 03 June 2019 01:40:20 +0100 (0:00:00.361) 0:03:05.985 ***********
TASK [container-engine/docker : get available packages on Ubuntu] **************
Monday 03 June 2019 01:40:21 +0100 (0:00:00.338) 0:03:06.324 ***********
TASK [container-engine/docker : show available packages on ubuntu] *************
Monday 03 June 2019 01:40:21 +0100 (0:00:00.281) 0:03:06.605 ***********
TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] ***
Monday 03 June 2019 01:40:21 +0100 (0:00:00.277) 0:03:06.883 ***********
TASK [container-engine/docker : ensure service is started if docker packages are already present] ***
Monday 03 June 2019 01:40:22 +0100 (0:00:00.284) 0:03:07.167 ***********
ok: [kube3]
ok: [kube2]
ok: [kube1]
[WARNING]: flush_handlers task does not support when conditional
TASK [container-engine/docker : set fact for docker_version] *******************
Monday 03 June 2019 01:40:24 +0100 (0:00:01.973) 0:03:09.141 ***********
ok: [kube1]
ok: [kube3]
ok: [kube2]
TASK [container-engine/docker : check minimum docker version for docker_dns mode.
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] ***
Monday 03 June 2019 01:40:25 +0100 (0:00:01.136) 0:03:10.278 ***********
TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] ***
Monday 03 June 2019 01:40:25 +0100 (0:00:00.347) 0:03:10.625 ***********
changed: [kube1]
changed: [kube2]
changed: [kube3]
TASK [container-engine/docker : Write docker proxy drop-in] ********************
Monday 03 June 2019 01:40:26 +0100 (0:00:01.072) 0:03:11.698 ***********
TASK [container-engine/docker : get systemd version] ***************************
Monday 03 June 2019 01:40:26 +0100 (0:00:00.318) 0:03:12.017 ***********
TASK [container-engine/docker : Write docker.service systemd file] *************
Monday 03 June 2019 01:40:27 +0100 (0:00:00.292) 0:03:12.309 ***********
TASK [container-engine/docker : Write docker options systemd drop-in] **********
Monday 03 June 2019 01:40:27 +0100 (0:00:00.312) 0:03:12.621 ***********
changed: [kube1]
changed: [kube2]
changed: [kube3]
TASK [container-engine/docker : Write docker dns systemd drop-in] **************
Monday 03 June 2019 01:40:29 +0100 (0:00:02.056) 0:03:14.678 ***********
changed: [kube1]
changed: [kube2]
changed: [kube3]
TASK [container-engine/docker : Copy docker orphan clean up script to the node] ***
Monday 03 June 2019 01:40:31 +0100 (0:00:02.192) 0:03:16.870 ***********
TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] ***
Monday 03 June 2019 01:40:32 +0100 (0:00:00.360) 0:03:17.230 ***********
RUNNING HANDLER [container-engine/docker : restart docker] *********************
Monday 03 June 2019 01:40:32 +0100 (0:00:00.248) 0:03:17.479 ***********
changed: [kube3]
changed: [kube2]
changed: [kube1]
RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************
Monday 03 June 2019 01:40:33 +0100 (0:00:01.041) 0:03:18.521 ***********
changed: [kube3]
changed: [kube2]
changed: [kube1]
RUNNING HANDLER [container-engine/docker : Docker | reload docker.socket] ******
Monday 03 June 2019 01:40:34 +0100 (0:00:01.158) 0:03:19.679 ***********
RUNNING HANDLER [container-engine/docker : Docker | reload docker] *************
Monday 03 June 2019 01:40:34 +0100 (0:00:00.300) 0:03:19.979 ***********
changed: [kube3]
changed: [kube2]
changed: [kube1]
RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] ***
Monday 03 June 2019 01:40:39 +0100 (0:00:04.310) 0:03:24.290 ***********
Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
[container-engine/docker : Docker | pause while Docker restarts]
Waiting for docker restart:
ok: [kube3]
RUNNING HANDLER [container-engine/docker : Docker | wait for docker] ***********
Monday 03 June 2019 01:40:49 +0100 (0:00:10.210) 0:03:34.500 ***********
changed: [kube3]
changed: [kube2]
changed: [kube1]
TASK [container-engine/docker : ensure docker service is started and enabled] ***
Monday 03 June 2019 01:40:50 +0100 (0:00:01.212) 0:03:35.713 ***********
ok: [kube1] => (item=docker)
ok: [kube2] => (item=docker)
ok: [kube3] => (item=docker)
TASK [download : include_tasks] ************************************************
Monday 03 June 2019 01:40:51 +0100 (0:00:01.235) 0:03:36.948 ***********
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3
TASK [download : Register docker images info] **********************************
Monday 03 June 2019 01:40:52 +0100 (0:00:00.508) 0:03:37.456 ***********
ok: [kube1]
ok: [kube2]
ok: [kube3]
TASK [download : container_download | Create dest directory for saved/loaded container images] ***
Monday 03 June 2019 01:40:53 +0100 (0:00:01.150) 0:03:38.606 ***********
changed: [kube1]
changed: [kube3]
changed: [kube2]
TASK [download : container_download | create local directory for saved/loaded container images] ***
Monday 03 June 2019 01:40:54 +0100 (0:00:00.935) 0:03:39.542 ***********
TASK [download : 
Download items] ***********************************************
Monday 03 June 2019 01:40:54 +0100 (0:00:00.111) 0:03:39.653 ***********
fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
[...the identical 'delegate_to' TaskInclude failure repeats for kube1, kube2 and kube3 on this task and on each subsequent "Download items" iteration; duplicate messages omitted...]
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube3, kube2
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube3, kube2
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube3, kube2
fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
fatal: [kube3]: FAILED!
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} PLAY RECAP ********************************************************************* kube1 : ok=108 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0 kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0 kube3 : ok=95 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0 Monday 03 June 2019 01:40:57 +0100 (0:00:02.689) 0:03:42.343 *********** =============================================================================== Install packages ------------------------------------------------------- 32.53s Wait for host to be available ------------------------------------------ 24.03s gather facts from all instances ---------------------------------------- 17.25s container-engine/docker : Docker | pause while Docker restarts --------- 10.21s Persist loaded modules -------------------------------------------------- 6.22s container-engine/docker : Docker | reload docker ------------------------ 4.31s kubernetes/preinstall : Create kubernetes directories ------------------- 3.95s download : Download items ----------------------------------------------- 2.69s bootstrap-os : Gather nodes 
hostnames ----------------------------------- 2.65s Load required kernel modules -------------------------------------------- 2.64s kubernetes/preinstall : Create cni directories -------------------------- 2.59s bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.42s Extend root VG ---------------------------------------------------------- 2.42s container-engine/docker : Write docker dns systemd drop-in -------------- 2.19s kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.11s kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.10s container-engine/docker : Write docker options systemd drop-in ---------- 2.06s download : Sync container ----------------------------------------------- 2.04s Gathering Facts --------------------------------------------------------- 2.04s download : Download items ----------------------------------------------- 2.02s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
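[Editor's note] The repeated fatal error in this build is Ansible (in the version used by this job) rejecting the `delegate_to` keyword placed directly on a dynamic `include_tasks` task. A minimal sketch of the failing shape and one accepted alternative follows; the included file name and the delegate expression are illustrative assumptions, not the actual kubespray contents:

```yaml
# Failing shape: delegate_to sits directly on a dynamic include, which
# this Ansible version rejects ("'delegate_to' is not a valid attribute
# for a TaskInclude").
- name: container_download | Make download decision if pull is required by tag or sha256
  include_tasks: set_docker_image_facts.yml   # illustrative file name
  delegate_to: "{{ download_delegate }}"      # illustrative expression

# Accepted shape (Ansible >= 2.7): forward the keyword to the included
# tasks through the include's `apply` option instead.
- name: container_download | Make download decision if pull is required by tag or sha256
  include_tasks:
    file: set_docker_image_facts.yml
    apply:
      delegate_to: "{{ download_delegate }}"
```

With `apply`, the keyword is attached to the tasks inside the included file rather than to the include statement itself, which is why the parser no longer objects.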
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Mon Jun 3 01:17:18 2019 From: ci at centos.org (ci at centos.org) Date: Mon, 3 Jun 2019 01:17:18 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #212 In-Reply-To: <1977407045.1782.1559438197273.JavaMail.jenkins@jenkins.ci.centos.org> References: <1977407045.1782.1559438197273.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <385592732.1843.1559524638152.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 56.45 KB...] changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. 
Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... [701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], 
u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── create └── prepare --> Scenario: 'default' --> Action: 'create' [DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg. PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] 
******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', 
u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
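[Editor's note] The ansible-lint findings above ([701] "Role info should contain platforms" and the [703] "Should change default metadata" series) all point at the role's unedited Galaxy metadata. A sketch of a meta/main.yml that would satisfy both rules; every value here is a placeholder, not the project's real metadata:

```yaml
# roles/firewall_config/meta/main.yml -- illustrative values only
galaxy_info:
  author: Gluster maintainers                        # replaces "your name" (703)
  description: Firewall configuration for Gluster nodes   # replaces "your description" (703)
  company: Gluster community                         # replaces default placeholder (703)
  license: GPLv3                                     # replaces "license (GPLv2, CC-BY, etc)" (703)
  min_ansible_version: 2.4                           # illustrative
  platforms:                                         # rule 701: platforms must be listed
    - name: EL
      versions:
        - 7
  galaxy_tags: []
dependencies: []
```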
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Tue Jun 4 00:15:56 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 4 Jun 2019 00:15:56 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #384 In-Reply-To: <1620496074.1840.1559520959293.JavaMail.jenkins@jenkins.ci.centos.org> References: <1620496074.1840.1559520959293.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1375301233.1899.1559607356488.JavaMail.jenkins@jenkins.ci.centos.org> See Changes: [amarts] Add centosci job for running clang-scan on gluster block (#66) ------------------------------------------ [...truncated 37.40 KB...] Total 61 MB/s | 143 MB 00:02 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7_6.2.x86_64 7/49 Installing : kernel-headers-3.10.0-957.12.2.el7.x86_64 8/49 Installing : glibc-headers-2.17-260.el7_6.5.x86_64 9/49 Installing : glibc-devel-2.17-260.el7_6.5.x86_64 10/49 Installing : gcc-4.8.5-36.el7_6.2.x86_64 11/49 Installing : elfutils-0.172-2.el7.x86_64 12/49 Installing : pakchois-0.4-10.el7.x86_64 13/49 Installing : 
unzip-6.0-19.el7.x86_64 14/49 Installing : dwz-0.11-3.el7.x86_64 15/49 Installing : bzip2-1.0.6-13.el7.x86_64 16/49 Installing : usermode-1.111-5.el7.x86_64 17/49 Installing : patch-2.7.1-10.el7_5.x86_64 18/49 Installing : python-backports-1.0-8.el7.x86_64 19/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 20/49 Installing : python-urllib3-1.10.2-5.el7.noarch 21/49 Installing : python-requests-2.6.0-1.el7_1.noarch 22/49 Installing : python-babel-0.9.6-8.el7.noarch 23/49 Installing : distribution-gpg-keys-1.30-1.el7.noarch 24/49 Installing : mock-core-configs-30.2-1.el7.noarch 25/49 Installing : libmodman-2.0.1-8.el7.x86_64 26/49 Installing : libproxy-0.4.11-11.el7.x86_64 27/49 Installing : python-markupsafe-0.11-10.el7.x86_64 28/49 Installing : python-jinja2-2.7.2-3.el7_6.noarch 29/49 Installing : python2-distro-1.2.0-3.el7.noarch 30/49 Installing : gdb-7.6.1-114.el7.x86_64 31/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/49 Installing : perl-srpm-macros-1-8.el7.noarch 33/49 Installing : pigz-2.3.4-1.el7.x86_64 34/49 Installing : python2-pyroute2-0.4.13-2.el7.noarch 35/49 Installing : golang-src-1.11.5-1.el7.noarch 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-9.el7_6.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.15-1.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 2/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 3/49 Verifying : 
rpm-build-4.11.3-35.el7.x86_64 4/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 5/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 6/49 Verifying : zip-3.0-11.el7.x86_64 7/49 Verifying : nettle-2.7.1-8.el7.x86_64 8/49 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 9/49 Verifying : golang-src-1.11.5-1.el7.noarch 10/49 Verifying : python2-pyroute2-0.4.13-2.el7.noarch 11/49 Verifying : pigz-2.3.4-1.el7.x86_64 12/49 Verifying : perl-srpm-macros-1-8.el7.noarch 13/49 Verifying : golang-1.11.5-1.el7.x86_64 14/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 15/49 Verifying : glibc-devel-2.17-260.el7_6.5.x86_64 16/49 Verifying : python-jinja2-2.7.2-3.el7_6.noarch 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : gdb-7.6.1-114.el7.x86_64 19/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 20/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 21/49 Verifying : gnutls-3.3.29-9.el7_6.x86_64 22/49 Verifying : python2-distro-1.2.0-3.el7.noarch 23/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 24/49 Verifying : libmodman-2.0.1-8.el7.x86_64 25/49 Verifying : mpfr-3.1.1-4.el7.x86_64 26/49 Verifying : distribution-gpg-keys-1.30-1.el7.noarch 27/49 Verifying : python-babel-0.9.6-8.el7.noarch 28/49 Verifying : mock-1.4.15-1.el7.noarch 29/49 Verifying : apr-util-1.5.2-6.el7.x86_64 30/49 Verifying : python-backports-1.0-8.el7.x86_64 31/49 Verifying : patch-2.7.1-10.el7_5.x86_64 32/49 Verifying : libmpc-1.0.1-3.el7.x86_64 33/49 Verifying : usermode-1.111-5.el7.x86_64 34/49 Verifying : python-six-1.9.0-2.el7.noarch 35/49 Verifying : libproxy-0.4.11-11.el7.x86_64 36/49 Verifying : neon-0.30.0-3.el7.x86_64 37/49 Verifying : mock-core-configs-30.2-1.el7.noarch 38/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 39/49 Verifying : bzip2-1.0.6-13.el7.x86_64 40/49 Verifying : subversion-1.7.14-14.el7.x86_64 41/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 42/49 Verifying : glibc-headers-2.17-260.el7_6.5.x86_64 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : 
unzip-6.0-19.el7.x86_64 45/49 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 46/49 Verifying : pakchois-0.4-10.el7.x86_64 47/49 Verifying : elfutils-0.172-2.el7.x86_64 48/49 Verifying : kernel-headers-3.10.0-957.12.2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.15-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.30-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5 glibc-headers.x86_64 0:2.17-260.el7_6.5 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.12.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-3.el7_6 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-3.el7 python2-pyroute2.noarch 0:0.4.13-2.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1667 0 --:--:-- --:--:-- --:--:-- 1675 24 8513k 24 2092k 0 0 3378k 0 0:00:02 --:--:-- 0:00:02 3378k100 8513k 100 8513k 0 0 11.3M 0 --:--:-- --:--:-- --:--:-- 55.9M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 1865 0 --:--:-- --:--:-- --:--:-- 1871 47 38.3M 47 18.4M 0 0 21.1M 0 0:00:01 --:--:-- 0:00:01 21.1M100 38.3M 100 38.3M 0 0 24.9M 0 0:00:01 0:00:01 --:--:-- 29.8M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 548 0 --:--:-- --:--:-- --:--:-- 550 0 0 0 620 0 0 1500 0 --:--:-- --:--:-- --:--:-- 1500 73 10.7M 73 8128k 0 0 9266k 0 0:00:01 --:--:-- 0:00:01 9266k100 10.7M 100 10.7M 0 0 10.9M 0 --:--:-- --:--:-- --:--:-- 28.5M ~/nightlyrpm2VhN8z/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpm2VhN8z/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpm2VhN8z/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpm2VhN8z ~ INFO: mock.py version 1.4.15 starting (python version = 2.7.5)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpm2VhN8z/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.15 INFO: Mock Version: 1.4.15 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpm2VhN8z/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 23 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 0861db8bad02429ba95dd894cbe2a034 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.9v8vnl:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins1353427189682093879.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done 3061917f +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 101 | n37.pufty | 172.19.3.101 | pufty | 3638 | Deployed | 3061917f | None | None | 7 | x86_64 | 1 | 2360 | None | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Tue Jun 4 00:42:13 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 4 Jun 2019 00:42:13 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #188 In-Reply-To: <77435703.1842.1559522457672.JavaMail.jenkins@jenkins.ci.centos.org> References: <77435703.1842.1559522457672.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1035220552.1900.1559608933838.JavaMail.jenkins@jenkins.ci.centos.org> See Changes: [amarts] Add centosci job for running clang-scan on gluster block (#66) ------------------------------------------ [...truncated 287.25 KB...] 
TASK [container-engine/docker : check number of search domains] **************** Tuesday 04 June 2019 01:41:31 +0100 (0:00:00.298) 0:02:57.629 ********** TASK [container-engine/docker : check length of search domains] **************** Tuesday 04 June 2019 01:41:32 +0100 (0:00:00.350) 0:02:57.980 ********** TASK [container-engine/docker : check for minimum kernel version] ************** Tuesday 04 June 2019 01:41:32 +0100 (0:00:00.365) 0:02:58.345 ********** TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Tuesday 04 June 2019 01:41:32 +0100 (0:00:00.310) 0:02:58.656 ********** TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] *** Tuesday 04 June 2019 01:41:33 +0100 (0:00:00.524) 0:02:59.181 ********** TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Tuesday 04 June 2019 01:41:34 +0100 (0:00:01.326) 0:03:00.507 ********** TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Tuesday 04 June 2019 01:41:34 +0100 (0:00:00.297) 0:03:00.805 ********** TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Tuesday 04 June 2019 01:41:35 +0100 (0:00:00.281) 0:03:01.086 ********** TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Tuesday 04 June 2019 01:41:35 +0100 (0:00:00.309) 0:03:01.396 ********** TASK [container-engine/docker : Configure docker repository on Fedora] ********* Tuesday 04 June 2019 01:41:35 +0100 (0:00:00.303) 0:03:01.699 ********** TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Tuesday 04 June 2019 01:41:36 +0100 (0:00:00.275) 0:03:01.974 ********** TASK [container-engine/docker : Copy yum.conf for editing] ********************* Tuesday 04 June 2019 01:41:36 +0100 (0:00:00.291) 0:03:02.266 ********** TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Tuesday 04 June 2019 
01:41:36 +0100 (0:00:00.283) 0:03:02.550 ********** TASK [container-engine/docker : ensure docker packages are installed] ********** Tuesday 04 June 2019 01:41:36 +0100 (0:00:00.277) 0:03:02.828 ********** TASK [container-engine/docker : Ensure docker packages are installed] ********** Tuesday 04 June 2019 01:41:37 +0100 (0:00:00.360) 0:03:03.188 ********** TASK [container-engine/docker : get available packages on Ubuntu] ************** Tuesday 04 June 2019 01:41:37 +0100 (0:00:00.355) 0:03:03.544 ********** TASK [container-engine/docker : show available packages on ubuntu] ************* Tuesday 04 June 2019 01:41:37 +0100 (0:00:00.277) 0:03:03.822 ********** TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Tuesday 04 June 2019 01:41:38 +0100 (0:00:00.283) 0:03:04.106 ********** TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Tuesday 04 June 2019 01:41:38 +0100 (0:00:00.284) 0:03:04.390 ********** ok: [kube2] ok: [kube3] ok: [kube1] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Tuesday 04 June 2019 01:41:40 +0100 (0:00:01.988) 0:03:06.379 ********** ok: [kube1] ok: [kube2] ok: [kube3] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Tuesday 04 June 2019 01:41:41 +0100 (0:00:01.054) 0:03:07.434 ********** TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Tuesday 04 June 2019 01:41:41 +0100 (0:00:00.280) 0:03:07.715 ********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Tuesday 04 June 2019 01:41:42 +0100 (0:00:00.986) 0:03:08.701 ********** TASK [container-engine/docker : get systemd version] *************************** Tuesday 04 June 2019 01:41:43 +0100 (0:00:00.321) 0:03:09.022 ********** TASK [container-engine/docker : Write docker.service systemd file] ************* Tuesday 04 June 2019 01:41:43 +0100 (0:00:00.297) 0:03:09.320 ********** TASK [container-engine/docker : Write docker options systemd drop-in] ********** Tuesday 04 June 2019 01:41:43 +0100 (0:00:00.303) 0:03:09.624 ********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Tuesday 04 June 2019 01:41:45 +0100 (0:00:02.114) 0:03:11.739 ********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Tuesday 04 June 2019 01:41:47 +0100 (0:00:01.991) 0:03:13.730 ********** TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Tuesday 04 June 2019 01:41:48 +0100 (0:00:00.292) 0:03:14.023 ********** RUNNING HANDLER [container-engine/docker : restart docker] ********************* Tuesday 04 June 2019 01:41:48 +0100 (0:00:00.235) 0:03:14.258 ********** changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Tuesday 04 June 2019 01:41:49 +0100 (0:00:00.895) 0:03:15.154 ********** changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER 
[container-engine/docker : Docker | reload docker.socket] ****** Tuesday 04 June 2019 01:41:50 +0100 (0:00:01.187) 0:03:16.342 ********** RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Tuesday 04 June 2019 01:41:50 +0100 (0:00:00.324) 0:03:16.666 ********** changed: [kube1] changed: [kube3] changed: [kube2] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Tuesday 04 June 2019 01:41:54 +0100 (0:00:04.198) 0:03:20.865 ********** Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube1] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Tuesday 04 June 2019 01:42:05 +0100 (0:00:10.218) 0:03:31.083 ********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : ensure docker service is started and enabled] *** Tuesday 04 June 2019 01:42:06 +0100 (0:00:01.308) 0:03:32.392 ********** ok: [kube1] => (item=docker) ok: [kube2] => (item=docker) ok: [kube3] => (item=docker) TASK [download : include_tasks] ************************************************ Tuesday 04 June 2019 01:42:07 +0100 (0:00:01.256) 0:03:33.648 ********** included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Tuesday 04 June 2019 01:42:08 +0100 (0:00:00.524) 0:03:34.173 ********** ok: [kube1] ok: [kube2] ok: [kube3] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Tuesday 04 June 2019 01:42:09 +0100 (0:00:01.181) 0:03:35.354 ********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [download : container_download | create local directory for saved/loaded container images] *** Tuesday 04 June 2019 01:42:10 +0100 (0:00:01.082) 0:03:36.437 ********** TASK [download : 
Download items] *********************************************** Tuesday 04 June 2019 01:42:10 +0100 (0:00:00.126) 0:03:36.563 **********
fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
[...the identical 'delegate_to' TaskInclude failure repeats for kube2 and kube3, and again for every subsequent download task on all three hosts; duplicates omitted...]
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
PLAY RECAP *********************************************************************
kube1 : ok=109 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0
kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0
kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0
Tuesday 04 June 2019 01:42:13 +0100 (0:00:02.714) 0:03:39.277 **********
===============================================================================
Install packages ------------------------------------------------------- 32.47s
Wait for host to be available ------------------------------------------ 21.75s
gather facts from all instances ---------------------------------------- 17.15s
container-engine/docker : Docker | pause while Docker restarts --------- 10.22s
Persist loaded modules -------------------------------------------------- 6.21s
container-engine/docker : Docker | reload docker ------------------------ 4.20s
kubernetes/preinstall : Create kubernetes directories ------------------- 3.99s
download : Download items ----------------------------------------------- 2.71s
Load required kernel modules
-------------------------------------------- 2.66s bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.66s bootstrap-os : Gather nodes hostnames ----------------------------------- 2.65s kubernetes/preinstall : Create cni directories -------------------------- 2.58s Extend root VG ---------------------------------------------------------- 2.45s Gathering Facts --------------------------------------------------------- 2.39s kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.35s kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.12s bootstrap-os : Create remote_tmp for it is used by another module ------- 2.12s container-engine/docker : Write docker options systemd drop-in ---------- 2.12s container-engine/docker : Write docker dns systemd drop-in -------------- 1.99s container-engine/docker : ensure service is started if docker packages are already present --- 1.99s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
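The fatal errors in this build all stem from newer Ansible rejecting task-level keywords on dynamic includes: `delegate_to` cannot be attached to an `include_tasks` statement. A minimal sketch of the pattern involved and one possible fix, assuming a hypothetical `download_host` variable (the exact kubespray task layout is not shown in this log):

```yaml
# Rejected: "'delegate_to' is not a valid attribute for a TaskInclude".
# Dynamic includes (include_tasks) accept only a small set of keywords.
- include_tasks: download_container.yml
  delegate_to: "{{ download_host }}"   # illustrative variable name

# One possible fix: a static import copies its keywords onto every
# imported task; alternatively, move delegate_to onto the individual
# tasks inside download_container.yml.
- import_tasks: download_container.yml
  delegate_to: "{{ download_host }}"
```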
Could not match :Build started : False
Logical operation result is FALSE
Skipping script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0

From ci at centos.org Tue Jun 4 01:22:57 2019
From: ci at centos.org (ci at centos.org)
Date: Tue, 4 Jun 2019 01:22:57 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #213
In-Reply-To: <385592732.1843.1559524638152.JavaMail.jenkins@jenkins.ci.centos.org>
References: <385592732.1843.1559524638152.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1613691823.1904.1559611377865.JavaMail.jenkins@jenkins.ci.centos.org>

See 

Changes:

[amarts] Add centosci job for running clang-scan on gluster block (#66)

------------------------------------------
[...truncated 56.50 KB...]
changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', 
u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── create └── 
prepare --> Scenario: 'default' --> Action: 'create' [DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg. PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', 
u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
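The ansible-lint 701/703 warnings above all point at placeholder galaxy metadata left in the role's meta/main.yml. A sketch of a meta/main.yml that would satisfy those rules; the author, description, company, license and platform values below are illustrative placeholders, not taken from the repository:

```yaml
galaxy_info:
  author: Gluster Ansible maintainers                  # illustrative
  description: Configure firewalld for GlusterFS nodes # illustrative
  company: Red Hat                                     # illustrative
  license: GPLv3
  min_ansible_version: "2.5"
  platforms:            # rule 701: role info should contain platforms
    - name: EL
      versions:
        - "7"
  galaxy_tags:
    - gluster
    - firewall
dependencies: []
```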
Could not match :Build started : False
Logical operation result is FALSE
Skipping script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0

From ci at centos.org Wed Jun 5 00:16:07 2019
From: ci at centos.org (ci at centos.org)
Date: Wed, 5 Jun 2019 00:16:07 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #385
In-Reply-To: <1375301233.1899.1559607356488.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1375301233.1899.1559607356488.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <2027601197.72.1559693767535.JavaMail.jenkins@jenkins.ci.centos.org>

See 

Changes:

[dkhandel] Add centosci job for running clang-scan on gluster block
[dkhandel] Add option for clang-scan to view reports on the browser
[dkhandel] Give executable permission to clang script
[dkhandel] Intsall mock package on the node
[dkhandel] Fix the typo

------------------------------------------
[...truncated 38.58 KB...]
Transaction test succeeded Running transaction Installing : python36-libs-3.6.8-1.el7.x86_64 1/52 Installing : python36-3.6.8-1.el7.x86_64 2/52 Installing : apr-1.4.8-3.el7_4.1.x86_64 3/52 Installing : mpfr-3.1.1-4.el7.x86_64 4/52 Installing : libmpc-1.0.1-3.el7.x86_64 5/52 Installing : apr-util-1.5.2-6.el7.x86_64 6/52 Installing : python36-six-1.11.0-3.el7.noarch 7/52 Installing : cpp-4.8.5-36.el7_6.2.x86_64 8/52 Installing : python36-idna-2.7-2.el7.noarch 9/52 Installing : python36-pysocks-1.6.8-6.el7.noarch 10/52 Installing : python36-urllib3-1.19.1-5.el7.noarch 11/52 Installing : python36-pyroute2-0.4.13-2.el7.noarch 12/52 Installing : python36-setuptools-39.2.0-3.el7.noarch 13/52 Installing : python36-chardet-2.3.0-6.el7.noarch 14/52 Installing : python36-requests-2.12.5-3.el7.noarch 15/52 Installing : python36-distro-1.2.0-3.el7.noarch 16/52 Installing : python36-markupsafe-0.23-3.el7.x86_64 17/52 Installing : python36-jinja2-2.8.1-2.el7.noarch 18/52 Installing : python36-rpm-4.11.3-4.el7.x86_64 19/52 Installing : kernel-headers-3.10.0-957.12.2.el7.x86_64 20/52 Installing : glibc-headers-2.17-260.el7_6.5.x86_64 21/52 Installing : glibc-devel-2.17-260.el7_6.5.x86_64 22/52 Installing : gcc-4.8.5-36.el7_6.2.x86_64 23/52 Installing : elfutils-0.172-2.el7.x86_64 24/52 Installing : unzip-6.0-19.el7.x86_64 25/52 Installing : dwz-0.11-3.el7.x86_64 26/52 Installing : bzip2-1.0.6-13.el7.x86_64 27/52 Installing : usermode-1.111-5.el7.x86_64 28/52 Installing : pakchois-0.4-10.el7.x86_64 29/52 Installing : patch-2.7.1-10.el7_5.x86_64 30/52 Installing : distribution-gpg-keys-1.30-1.el7.noarch 31/52 Installing : mock-core-configs-30.2-1.el7.noarch 32/52 Installing : libmodman-2.0.1-8.el7.x86_64 33/52 Installing : libproxy-0.4.11-11.el7.x86_64 34/52 Installing : gdb-7.6.1-114.el7.x86_64 35/52 Installing : perl-Thread-Queue-3.02-2.el7.noarch 36/52 Installing : perl-srpm-macros-1-8.el7.noarch 37/52 Installing : pigz-2.3.4-1.el7.x86_64 38/52 Installing : 
golang-src-1.11.5-1.el7.noarch 39/52 Installing : nettle-2.7.1-8.el7.x86_64 40/52 Installing : zip-3.0-11.el7.x86_64 41/52 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 42/52 Installing : mercurial-2.6.2-8.el7_4.x86_64 43/52 Installing : trousers-0.3.14-2.el7.x86_64 44/52 Installing : gnutls-3.3.29-9.el7_6.x86_64 45/52 Installing : neon-0.30.0-3.el7.x86_64 46/52 Installing : subversion-libs-1.7.14-14.el7.x86_64 47/52 Installing : subversion-1.7.14-14.el7.x86_64 48/52 Installing : golang-1.11.5-1.el7.x86_64 49/52 Installing : golang-bin-1.11.5-1.el7.x86_64 50/52 Installing : rpm-build-4.11.3-35.el7.x86_64 51/52 Installing : mock-1.4.16-1.el7.noarch 52/52 Verifying : trousers-0.3.14-2.el7.x86_64 1/52 Verifying : python36-idna-2.7-2.el7.noarch 2/52 Verifying : rpm-build-4.11.3-35.el7.x86_64 3/52 Verifying : python36-pysocks-1.6.8-6.el7.noarch 4/52 Verifying : mercurial-2.6.2-8.el7_4.x86_64 5/52 Verifying : zip-3.0-11.el7.x86_64 6/52 Verifying : python36-3.6.8-1.el7.x86_64 7/52 Verifying : subversion-libs-1.7.14-14.el7.x86_64 8/52 Verifying : python36-urllib3-1.19.1-5.el7.noarch 9/52 Verifying : nettle-2.7.1-8.el7.x86_64 10/52 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 11/52 Verifying : golang-src-1.11.5-1.el7.noarch 12/52 Verifying : python36-pyroute2-0.4.13-2.el7.noarch 13/52 Verifying : pigz-2.3.4-1.el7.x86_64 14/52 Verifying : perl-srpm-macros-1-8.el7.noarch 15/52 Verifying : golang-1.11.5-1.el7.x86_64 16/52 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 17/52 Verifying : glibc-devel-2.17-260.el7_6.5.x86_64 18/52 Verifying : golang-bin-1.11.5-1.el7.x86_64 19/52 Verifying : gdb-7.6.1-114.el7.x86_64 20/52 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/52 Verifying : gnutls-3.3.29-9.el7_6.x86_64 22/52 Verifying : mock-1.4.16-1.el7.noarch 23/52 Verifying : libmodman-2.0.1-8.el7.x86_64 24/52 Verifying : python36-setuptools-39.2.0-3.el7.noarch 25/52 Verifying : mpfr-3.1.1-4.el7.x86_64 26/52 Verifying : distribution-gpg-keys-1.30-1.el7.noarch 27/52 
Verifying : python36-six-1.11.0-3.el7.noarch 28/52 Verifying : apr-util-1.5.2-6.el7.x86_64 29/52 Verifying : python36-chardet-2.3.0-6.el7.noarch 30/52 Verifying : patch-2.7.1-10.el7_5.x86_64 31/52 Verifying : libmpc-1.0.1-3.el7.x86_64 32/52 Verifying : pakchois-0.4-10.el7.x86_64 33/52 Verifying : usermode-1.111-5.el7.x86_64 34/52 Verifying : apr-1.4.8-3.el7_4.1.x86_64 35/52 Verifying : libproxy-0.4.11-11.el7.x86_64 36/52 Verifying : neon-0.30.0-3.el7.x86_64 37/52 Verifying : mock-core-configs-30.2-1.el7.noarch 38/52 Verifying : bzip2-1.0.6-13.el7.x86_64 39/52 Verifying : subversion-1.7.14-14.el7.x86_64 40/52 Verifying : python36-distro-1.2.0-3.el7.noarch 41/52 Verifying : glibc-headers-2.17-260.el7_6.5.x86_64 42/52 Verifying : dwz-0.11-3.el7.x86_64 43/52 Verifying : unzip-6.0-19.el7.x86_64 44/52 Verifying : python36-markupsafe-0.23-3.el7.x86_64 45/52 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 46/52 Verifying : python36-requests-2.12.5-3.el7.noarch 47/52 Verifying : python36-jinja2-2.8.1-2.el7.noarch 48/52 Verifying : python36-libs-3.6.8-1.el7.x86_64 49/52 Verifying : elfutils-0.172-2.el7.x86_64 50/52 Verifying : python36-rpm-4.11.3-4.el7.x86_64 51/52 Verifying : kernel-headers-3.10.0-957.12.2.el7.x86_64 52/52 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.16-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.30-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5 glibc-headers.x86_64 0:2.17-260.el7_6.5 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.12.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 
mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python36.x86_64 0:3.6.8-1.el7 python36-chardet.noarch 0:2.3.0-6.el7 python36-distro.noarch 0:1.2.0-3.el7 python36-idna.noarch 0:2.7-2.el7 python36-jinja2.noarch 0:2.8.1-2.el7 python36-libs.x86_64 0:3.6.8-1.el7 python36-markupsafe.x86_64 0:0.23-3.el7 python36-pyroute2.noarch 0:0.4.13-2.el7 python36-pysocks.noarch 0:1.6.8-6.el7 python36-requests.noarch 0:2.12.5-3.el7 python36-rpm.x86_64 0:4.11.3-4.el7 python36-setuptools.noarch 0:39.2.0-3.el7 python36-six.noarch 0:1.11.0-3.el7 python36-urllib3.noarch 0:1.19.1-5.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1683 0 --:--:-- --:--:-- --:--:-- 1685 100 8513k 100 8513k 0 0 14.5M 0 --:--:-- --:--:-- --:--:-- 14.5M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2199 0 --:--:-- --:--:-- --:--:-- 2200 100 38.3M 100 38.3M 0 0 37.2M 0 0:00:01 0:00:01 --:--:-- 37.2M Installing etcd. 
Version: v3.3.9
 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 534 0 --:--:-- --:--:-- --:--:-- 534 0 0 0 620 0 0 1531 0 --:--:-- --:--:-- --:--:-- 1531 87 10.7M 87 9665k 0 0 11.9M 0 --:--:-- --:--:-- --:--:-- 11.9M100 10.7M 100 10.7M 0 0 13.2M 0 --:--:-- --:--:-- --:--:-- 76.1M
~/nightlyrpmlN7x1z/go/src/github.com/gluster/glusterd2 ~
Installing vendored packages
Creating dist archive /root/nightlyrpmlN7x1z/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz
Created dist archive /root/nightlyrpmlN7x1z/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz
~
~/nightlyrpmlN7x1z ~
INFO: mock.py version 1.4.16 starting (python version = 3.6.8)...
Start: init plugins
INFO: selinux disabled
Finish: init plugins
Start: run
INFO: Start(/root/nightlyrpmlN7x1z/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64)
Start: clean chroot
Finish: clean chroot
Start: chroot init
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled yum cache
Start: cleaning yum metadata
Finish: cleaning yum metadata
INFO: enabled HW Info plugin
Mock Version: 1.4.16
INFO: Mock Version: 1.4.16
Start: yum install
Finish: yum install
Start: creating root cache
Finish: creating root cache
Finish: chroot init
INFO: Installed packages:
Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: Outputting list of installed packages
Finish: Outputting list of installed packages
ERROR: Exception(/root/nightlyrpmlN7x1z/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 24 seconds
INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64
INFO: Cleaning up build root ('cleanup_on_failure=True')
Start: clean chroot
Finish: clean chroot
ERROR: Command failed:
 # /usr/bin/systemd-nspawn -q -M 9135fa56ba734a77b31536cb777e8731 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.c8a9kx5g:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for :Building remotely : True
Logical operation result is TRUE
Running script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins3476533769717681474.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done b270ca6c
+---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname  | ip_address  | chassis | used_count | current_state | comment  | distro | rel  | centos_version | architecture | node_pool | console_port | flavor |
+---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 91      | n27.pufty | 172.19.3.91 | pufty   | 3642       | Deployed      | b270ca6c | None   | None | 7              | x86_64       | 1         | 2260         | None   |
+---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0

From ci at centos.org Wed Jun
5 00:37:04 2019
From: ci at centos.org (ci at centos.org)
Date: Wed, 5 Jun 2019 00:37:04 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #189
In-Reply-To: <1035220552.1900.1559608933838.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1035220552.1900.1559608933838.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1614334885.73.1559695024635.JavaMail.jenkins@jenkins.ci.centos.org>

See 

Changes:

[dkhandel] Add centosci job for running clang-scan on gluster block
[dkhandel] Add option for clang-scan to view reports on the browser
[dkhandel] Give executable permission to clang script
[dkhandel] Intsall mock package on the node
[dkhandel] Fix the typo

------------------------------------------
[...truncated 287.11 KB...]

TASK [container-engine/docker : check number of search domains] ****************
Wednesday 05 June 2019 01:36:38 +0100 (0:00:00.124) 0:01:55.072 ********

TASK [container-engine/docker : check length of search domains] ****************
Wednesday 05 June 2019 01:36:39 +0100 (0:00:00.122) 0:01:55.195 ********

TASK [container-engine/docker : check for minimum kernel version] **************
Wednesday 05 June 2019 01:36:39 +0100 (0:00:00.125) 0:01:55.320 ********

TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] ***
Wednesday 05 June 2019 01:36:39 +0100 (0:00:00.125) 0:01:55.445 ********

TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] ***
Wednesday 05 June 2019 01:36:39 +0100 (0:00:00.246) 0:01:55.691 ********

TASK [container-engine/docker : ensure docker-ce repository public key is installed] ***
Wednesday 05 June 2019 01:36:40 +0100 (0:00:00.621) 0:01:56.313 ********

TASK [container-engine/docker : ensure docker-ce repository is enabled] ********
Wednesday 05 June 2019 01:36:40 +0100 (0:00:00.113) 0:01:56.426 ********

TASK [container-engine/docker : ensure docker-engine repository public key is installed] ***
Wednesday 05 June 2019 01:36:40 +0100 (0:00:00.108) 0:01:56.535 ********

TASK [container-engine/docker : ensure docker-engine repository is enabled] ****
Wednesday 05 June 2019 01:36:40 +0100 (0:00:00.134) 0:01:56.670 ********

TASK [container-engine/docker : Configure docker repository on Fedora] *********
Wednesday 05 June 2019 01:36:40 +0100 (0:00:00.131) 0:01:56.801 ********

TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] ***
Wednesday 05 June 2019 01:36:40 +0100 (0:00:00.120) 0:01:56.922 ********

TASK [container-engine/docker : Copy yum.conf for editing] *********************
Wednesday 05 June 2019 01:36:40 +0100 (0:00:00.121) 0:01:57.044 ********

TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ******
Wednesday 05 June 2019 01:36:41 +0100 (0:00:00.118) 0:01:57.162 ********

TASK [container-engine/docker : ensure docker packages are installed] **********
Wednesday 05 June 2019 01:36:41 +0100 (0:00:00.119) 0:01:57.282 ********

TASK [container-engine/docker : Ensure docker packages are installed] **********
Wednesday 05 June 2019 01:36:41 +0100 (0:00:00.152) 0:01:57.434 ********

TASK [container-engine/docker : get available packages on Ubuntu] **************
Wednesday 05 June 2019 01:36:41 +0100 (0:00:00.149) 0:01:57.584 ********

TASK [container-engine/docker : show available packages on ubuntu] *************
Wednesday 05 June 2019 01:36:41 +0100 (0:00:00.118) 0:01:57.703 ********

TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] ***
Wednesday 05 June 2019 01:36:41 +0100 (0:00:00.119) 0:01:57.822 ********

TASK [container-engine/docker : ensure service is started if docker packages are already present] ***
Wednesday 05 June 2019 01:36:41 +0100 (0:00:00.119) 0:01:57.941 ********
ok: [kube3]
ok: [kube2]
ok: [kube1]
 [WARNING]: flush_handlers task does not support when conditional

TASK [container-engine/docker : set fact for docker_version] *******************
Wednesday 05 June 2019 01:36:42 +0100 (0:00:00.878) 0:01:58.820 ********
ok: [kube1]
ok: [kube2]
ok: [kube3]

TASK [container-engine/docker : check minimum docker version for docker_dns mode. You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] ***
Wednesday 05 June 2019 01:36:43 +0100 (0:00:00.504) 0:01:59.324 ********

TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] ***
Wednesday 05 June 2019 01:36:43 +0100 (0:00:00.120) 0:01:59.445 ********
changed: [kube1]
changed: [kube2]
changed: [kube3]

TASK [container-engine/docker : Write docker proxy drop-in] ********************
Wednesday 05 June 2019 01:36:43 +0100 (0:00:00.557) 0:02:00.002 ********

TASK [container-engine/docker : get systemd version] ***************************
Wednesday 05 June 2019 01:36:43 +0100 (0:00:00.129) 0:02:00.132 ********

TASK [container-engine/docker : Write docker.service systemd file] *************
Wednesday 05 June 2019 01:36:44 +0100 (0:00:00.132) 0:02:00.264 ********

TASK [container-engine/docker : Write docker options systemd drop-in] **********
Wednesday 05 June 2019 01:36:44 +0100 (0:00:00.144) 0:02:00.408 ********
changed: [kube1]
changed: [kube2]
changed: [kube3]

TASK [container-engine/docker : Write docker dns systemd drop-in] **************
Wednesday 05 June 2019 01:36:45 +0100 (0:00:00.942) 0:02:01.351 ********
changed: [kube1]
changed: [kube2]
changed: [kube3]

TASK [container-engine/docker : Copy docker orphan clean up script to the node] ***
Wednesday 05 June 2019 01:36:46 +0100 (0:00:00.901) 0:02:02.253 ********

TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] ***
Wednesday 05 June 2019 01:36:46 +0100 (0:00:00.146) 0:02:02.399 ********

RUNNING HANDLER [container-engine/docker : restart docker] *********************
Wednesday 05 June 2019 01:36:46 +0100 (0:00:00.108) 0:02:02.508 ********
changed: [kube1]
changed: [kube2]
changed: [kube3]

RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************
Wednesday 05 June 2019 01:36:46 +0100 (0:00:00.433) 0:02:02.941 ********
changed: [kube3]
changed: [kube1]
changed: [kube2]

RUNNING HANDLER [container-engine/docker : Docker | reload docker.socket] ******
Wednesday 05 June 2019 01:36:47 +0100 (0:00:00.596) 0:02:03.537 ********

RUNNING HANDLER [container-engine/docker : Docker | reload docker] *************
Wednesday 05 June 2019 01:36:47 +0100 (0:00:00.123) 0:02:03.660 ********
changed: [kube1]
changed: [kube2]
changed: [kube3]

RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] ***
Wednesday 05 June 2019 01:36:50 +0100 (0:00:03.049) 0:02:06.709 ********
Pausing for 10 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
[container-engine/docker : Docker | pause while Docker restarts]
Waiting for docker restart:
ok: [kube1]

RUNNING HANDLER [container-engine/docker : Docker | wait for docker] ***********
Wednesday 05 June 2019 01:37:00 +0100 (0:00:10.092) 0:02:16.802 ********
changed: [kube1]
changed: [kube2]
changed: [kube3]

TASK [container-engine/docker : ensure docker service is started and enabled] ***
Wednesday 05 June 2019 01:37:01 +0100 (0:00:00.563) 0:02:17.365 ********
ok: [kube1] => (item=docker)
ok: [kube2] => (item=docker)
ok: [kube3] => (item=docker)

TASK [download : include_tasks] ************************************************
Wednesday 05 June 2019 01:37:01 +0100 (0:00:00.563) 0:02:17.929 ********
included:
/root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3

TASK [download : Register docker images info] **********************************
Wednesday 05 June 2019 01:37:01 +0100 (0:00:00.208) 0:02:18.137 ********
ok: [kube1]
ok: [kube2]
ok: [kube3]

TASK [download : container_download | Create dest directory for saved/loaded container images] ***
Wednesday 05 June 2019 01:37:02 +0100 (0:00:00.514) 0:02:18.652 ********
changed: [kube1]
changed: [kube2]
changed: [kube3]

TASK [download : container_download | create local directory for saved/loaded container images] ***
Wednesday 05 June 2019 01:37:02 +0100 (0:00:00.432) 0:02:19.084 ********

TASK [download : Download items] ***********************************************
Wednesday 05 June 2019 01:37:02 +0100 (0:00:00.056) 0:02:19.141 ********
fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
fatal: [kube2]: FAILED!
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
fatal: [kube3]: FAILED!
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} PLAY RECAP ********************************************************************* kube1 : ok=109 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0 kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0 kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0 Wednesday 05 June 2019 01:37:04 +0100 (0:00:01.385) 0:02:20.526 ******** =============================================================================== Install packages ------------------------------------------------------- 24.25s Wait for host to be available ------------------------------------------ 16.24s Extend root VG --------------------------------------------------------- 13.04s container-engine/docker : Docker | pause while Docker restarts --------- 10.09s gather facts from all instances ----------------------------------------- 9.96s Persist loaded modules -------------------------------------------------- 3.30s container-engine/docker : Docker | reload docker ------------------------ 3.05s Gathering Facts --------------------------------------------------------- 2.04s kubernetes/preinstall : Create kubernetes directories ------------------- 1.77s Load required kernel modules -------------------------------------------- 1.73s bootstrap-os : Gather nodes hostnames ----------------------------------- 1.59s Extend the root LV and FS to occupy remaining space --------------------- 1.57s bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 1.43s download : Download items 
----------------------------------------------- 1.39s
bootstrap-os : Disable fastestmirror plugin ----------------------------- 1.20s
bootstrap-os : Create remote_tmp for it is used by another module ------- 1.15s
kubernetes/preinstall : Create cni directories -------------------------- 1.12s
kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 1.06s
download : Download items ----------------------------------------------- 0.96s
container-engine/docker : Write docker options systemd drop-in ---------- 0.94s
==> kube3: An error occurred. The error will be shown after all tasks complete.
An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below.
An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again:
Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again.
Build step 'Execute shell' marked build as failure
Performing Post build task...
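For readers triaging this recurring failure: the message repeated above is Ansible (2.8+) rejecting `delegate_to` placed directly on a dynamic task include. A minimal sketch of the offending pattern and one common way to fix it, using the `apply` keyword; the included file name and variable below are illustrative, not taken from the kubespray source:

```yaml
# Rejected by newer Ansible: delegate_to applied directly to include_tasks
- name: container_download | Make download decision if pull is required by tag or sha256
  include_tasks: set_container_facts.yml   # illustrative file name
  delegate_to: "{{ download_delegate }}"   # illustrative variable

# One typical fix: pass the keyword through to the included tasks via apply
- name: container_download | Make download decision if pull is required by tag or sha256
  include_tasks:
    file: set_container_facts.yml
    apply:
      delegate_to: "{{ download_delegate }}"
```

Whether the actual kubespray fix used `apply`, a static `import_tasks`, or moved the directive into the included file is not visible in this log.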
Could not match :Build started : False
Logical operation result is FALSE
Skipping script : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0
From ci at centos.org Wed Jun 5 01:16:38 2019
From: ci at centos.org (ci at centos.org)
Date: Wed, 5 Jun 2019 01:16:38 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #214
In-Reply-To: <1613691823.1904.1559611377865.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1613691823.1904.1559611377865.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1908525810.82.1559697398679.JavaMail.jenkins@jenkins.ci.centos.org>
See 
Changes:
[dkhandel] Add centosci job for running clang-scan on gluster block
[dkhandel] Add option for clang-scan to view reports on the browser
[dkhandel] Give executable permission to clang script
[dkhandel] Intsall mock package on the node
[dkhandel] Fix the typo
------------------------------------------
[...truncated 56.42 KB...]
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Wait for instance(s) creation to complete] *******************************
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
--> Scenario: 'default'
--> Action: 'prepare'
PLAY [Prepare] *****************************************************************
TASK [Gathering Facts] *********************************************************
ok: [instance]
TASK [Install Dependency Packages] *********************************************
changed: [instance]
TASK [Install Dependency Packages] *********************************************
changed: [instance]
PLAY RECAP *********************************************************************
instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix
└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy
--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: author
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
(same metadata dict as above)
[703] Should change default metadata: description
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
(same metadata dict as above)
[703] Should change default metadata: company
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
(same metadata dict as above)
[703] Should change default metadata: license
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
(same metadata dict as above)
An error occurred during the test sequence action: 'lint'. Cleaning up.
--> Scenario: 'default'
--> Action: 'destroy'
PLAY [Destroy] *****************************************************************
TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Delete docker network(s)] ************************************************
PLAY RECAP *********************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix
└── default
    ├── create
    └── prepare
--> Scenario: 'default'
--> Action: 'create'
[DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
PLAY [Create] ******************************************************************
TASK [Log into a Docker registry] **********************************************
skipping: [localhost] => (item=None)
TASK [Create Dockerfiles from image names] *************************************
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Discover local Docker images] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]
TASK [Build an Ansible compatible image] ***************************************
ok: [localhost] => (item=None)
ok: [localhost]
TASK [Create docker network(s)] ************************************************
TASK [Determine the CMD directives] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]
TASK [Create molecule instance(s)] *********************************************
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Wait for instance(s) creation to complete] *******************************
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
--> Scenario: 'default'
--> Action: 'prepare'
PLAY [Prepare] *****************************************************************
TASK [Gathering Facts] *********************************************************
ok: [instance]
TASK [Install Dependency Packages] *********************************************
changed: [instance]
TASK [Install Dependency Packages] *********************************************
changed: [instance]
PLAY RECAP *********************************************************************
instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix
└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy
--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: author
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
(same metadata dict as above)
[703] Should change default metadata: description
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
(same metadata dict as above)
[703] Should change default metadata: company
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
(same metadata dict as above)
[703] Should change default metadata: license
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
(same metadata dict as above)
An error occurred during the test sequence action: 'lint'. Cleaning up.
--> Scenario: 'default'
--> Action: 'destroy'
PLAY [Destroy] *****************************************************************
TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Delete docker network(s)] ************************************************
PLAY RECAP *********************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
Build step 'Execute shell' marked build as failure
Performing Post build task...
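The [701]/[703] ansible-lint findings above all point at the role's unedited Galaxy boilerplate in meta/main.yml. A minimal sketch of metadata that would satisfy those rules; every value below is a placeholder for illustration, not the project's actual metadata:

```yaml
# meta/main.yml — fills in the defaults that ansible-lint rules 701/703 flag
galaxy_info:
  author: Gluster Ansible maintainers              # placeholder (rule 703: author)
  description: Configure firewalld for Gluster hosts  # placeholder (rule 703: description)
  company: Red Hat                                 # placeholder (rule 703: company)
  license: GPLv3                                   # placeholder (rule 703: license)
  min_ansible_version: 2.5
  platforms:                                       # rule 701: role info should contain platforms
    - name: EL
      versions:
        - '7'
  galaxy_tags: []
dependencies: []
```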
Could not match :Build started : False
Logical operation result is FALSE
Skipping script : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0
From ci at centos.org Thu Jun 6 00:16:02 2019
From: ci at centos.org (ci at centos.org)
Date: Thu, 6 Jun 2019 00:16:02 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #386
In-Reply-To: <2027601197.72.1559693767535.JavaMail.jenkins@jenkins.ci.centos.org>
References: <2027601197.72.1559693767535.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <574461503.171.1559780162879.JavaMail.jenkins@jenkins.ci.centos.org>
See 
------------------------------------------
[...truncated 38.58 KB...]
Transaction test succeeded Running transaction Installing : python36-libs-3.6.8-1.el7.x86_64 1/52 Installing : python36-3.6.8-1.el7.x86_64 2/52 Installing : apr-1.4.8-3.el7_4.1.x86_64 3/52 Installing : mpfr-3.1.1-4.el7.x86_64 4/52 Installing : libmpc-1.0.1-3.el7.x86_64 5/52 Installing : apr-util-1.5.2-6.el7.x86_64 6/52 Installing : python36-six-1.11.0-3.el7.noarch 7/52 Installing : cpp-4.8.5-36.el7_6.2.x86_64 8/52 Installing : python36-idna-2.7-2.el7.noarch 9/52 Installing : python36-pysocks-1.6.8-6.el7.noarch 10/52 Installing : python36-urllib3-1.19.1-5.el7.noarch 11/52 Installing : python36-pyroute2-0.4.13-2.el7.noarch 12/52 Installing : python36-setuptools-39.2.0-3.el7.noarch 13/52 Installing : python36-chardet-2.3.0-6.el7.noarch 14/52 Installing : python36-requests-2.12.5-3.el7.noarch 15/52 Installing : python36-distro-1.2.0-3.el7.noarch 16/52 Installing : python36-markupsafe-0.23-3.el7.x86_64 17/52 Installing : python36-jinja2-2.8.1-2.el7.noarch 18/52 Installing : python36-rpm-4.11.3-4.el7.x86_64 19/52 Installing : elfutils-0.172-2.el7.x86_64 20/52 Installing : unzip-6.0-19.el7.x86_64 21/52 Installing : dwz-0.11-3.el7.x86_64 22/52 
Installing : bzip2-1.0.6-13.el7.x86_64 23/52 Installing : usermode-1.111-5.el7.x86_64 24/52 Installing : pakchois-0.4-10.el7.x86_64 25/52 Installing : distribution-gpg-keys-1.31-1.el7.noarch 26/52 Installing : mock-core-configs-30.3-1.el7.noarch 27/52 Installing : patch-2.7.1-10.el7_5.x86_64 28/52 Installing : libmodman-2.0.1-8.el7.x86_64 29/52 Installing : libproxy-0.4.11-11.el7.x86_64 30/52 Installing : gdb-7.6.1-114.el7.x86_64 31/52 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/52 Installing : perl-srpm-macros-1-8.el7.noarch 33/52 Installing : pigz-2.3.4-1.el7.x86_64 34/52 Installing : golang-src-1.11.5-1.el7.noarch 35/52 Installing : nettle-2.7.1-8.el7.x86_64 36/52 Installing : zip-3.0-11.el7.x86_64 37/52 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 38/52 Installing : kernel-headers-3.10.0-957.12.2.el7.x86_64 39/52 Installing : glibc-headers-2.17-260.el7_6.5.x86_64 40/52 Installing : glibc-devel-2.17-260.el7_6.5.x86_64 41/52 Installing : gcc-4.8.5-36.el7_6.2.x86_64 42/52 Installing : mercurial-2.6.2-8.el7_4.x86_64 43/52 Installing : trousers-0.3.14-2.el7.x86_64 44/52 Installing : gnutls-3.3.29-9.el7_6.x86_64 45/52 Installing : neon-0.30.0-3.el7.x86_64 46/52 Installing : subversion-libs-1.7.14-14.el7.x86_64 47/52 Installing : subversion-1.7.14-14.el7.x86_64 48/52 Installing : golang-1.11.5-1.el7.x86_64 49/52 Installing : golang-bin-1.11.5-1.el7.x86_64 50/52 Installing : rpm-build-4.11.3-35.el7.x86_64 51/52 Installing : mock-1.4.16-1.el7.noarch 52/52 Verifying : trousers-0.3.14-2.el7.x86_64 1/52 Verifying : python36-idna-2.7-2.el7.noarch 2/52 Verifying : rpm-build-4.11.3-35.el7.x86_64 3/52 Verifying : python36-pysocks-1.6.8-6.el7.noarch 4/52 Verifying : mercurial-2.6.2-8.el7_4.x86_64 5/52 Verifying : kernel-headers-3.10.0-957.12.2.el7.x86_64 6/52 Verifying : zip-3.0-11.el7.x86_64 7/52 Verifying : python36-3.6.8-1.el7.x86_64 8/52 Verifying : subversion-libs-1.7.14-14.el7.x86_64 9/52 Verifying : python36-urllib3-1.19.1-5.el7.noarch 10/52 
Verifying : nettle-2.7.1-8.el7.x86_64 11/52 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 12/52 Verifying : golang-src-1.11.5-1.el7.noarch 13/52 Verifying : python36-pyroute2-0.4.13-2.el7.noarch 14/52 Verifying : pigz-2.3.4-1.el7.x86_64 15/52 Verifying : perl-srpm-macros-1-8.el7.noarch 16/52 Verifying : golang-1.11.5-1.el7.x86_64 17/52 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 18/52 Verifying : glibc-devel-2.17-260.el7_6.5.x86_64 19/52 Verifying : golang-bin-1.11.5-1.el7.x86_64 20/52 Verifying : gdb-7.6.1-114.el7.x86_64 21/52 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 22/52 Verifying : gnutls-3.3.29-9.el7_6.x86_64 23/52 Verifying : mock-1.4.16-1.el7.noarch 24/52 Verifying : libmodman-2.0.1-8.el7.x86_64 25/52 Verifying : python36-setuptools-39.2.0-3.el7.noarch 26/52 Verifying : mpfr-3.1.1-4.el7.x86_64 27/52 Verifying : python36-six-1.11.0-3.el7.noarch 28/52 Verifying : apr-util-1.5.2-6.el7.x86_64 29/52 Verifying : python36-chardet-2.3.0-6.el7.noarch 30/52 Verifying : patch-2.7.1-10.el7_5.x86_64 31/52 Verifying : distribution-gpg-keys-1.31-1.el7.noarch 32/52 Verifying : pakchois-0.4-10.el7.x86_64 33/52 Verifying : mock-core-configs-30.3-1.el7.noarch 34/52 Verifying : usermode-1.111-5.el7.x86_64 35/52 Verifying : apr-1.4.8-3.el7_4.1.x86_64 36/52 Verifying : libproxy-0.4.11-11.el7.x86_64 37/52 Verifying : neon-0.30.0-3.el7.x86_64 38/52 Verifying : bzip2-1.0.6-13.el7.x86_64 39/52 Verifying : subversion-1.7.14-14.el7.x86_64 40/52 Verifying : python36-distro-1.2.0-3.el7.noarch 41/52 Verifying : glibc-headers-2.17-260.el7_6.5.x86_64 42/52 Verifying : dwz-0.11-3.el7.x86_64 43/52 Verifying : unzip-6.0-19.el7.x86_64 44/52 Verifying : python36-markupsafe-0.23-3.el7.x86_64 45/52 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 46/52 Verifying : python36-requests-2.12.5-3.el7.noarch 47/52 Verifying : python36-jinja2-2.8.1-2.el7.noarch 48/52 Verifying : python36-libs-3.6.8-1.el7.x86_64 49/52 Verifying : elfutils-0.172-2.el7.x86_64 50/52 Verifying : 
python36-rpm-4.11.3-4.el7.x86_64 51/52 Verifying : libmpc-1.0.1-3.el7.x86_64 52/52 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.16-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.31-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5 glibc-headers.x86_64 0:2.17-260.el7_6.5 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.12.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.3-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python36.x86_64 0:3.6.8-1.el7 python36-chardet.noarch 0:2.3.0-6.el7 python36-distro.noarch 0:1.2.0-3.el7 python36-idna.noarch 0:2.7-2.el7 python36-jinja2.noarch 0:2.8.1-2.el7 python36-libs.x86_64 0:3.6.8-1.el7 python36-markupsafe.x86_64 0:0.23-3.el7 python36-pyroute2.noarch 0:0.4.13-2.el7 python36-pysocks.noarch 0:1.6.8-6.el7 python36-requests.noarch 0:2.12.5-3.el7 python36-rpm.x86_64 0:4.11.3-4.el7 python36-setuptools.noarch 0:39.2.0-3.el7 python36-six.noarch 0:1.11.0-3.el7 python36-urllib3.noarch 0:1.19.1-5.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0
(curl download progress meter, 8513k fetched)
Installing gometalinter.
Version: 2.0.5
(curl download progress meter, 38.3M fetched)
Installing etcd.
Version: v3.3.9
(curl download progress meter, 10.7M fetched)
~/nightlyrpmTGRqFl/go/src/github.com/gluster/glusterd2 ~
Installing vendored packages
Creating dist archive /root/nightlyrpmTGRqFl/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz
Created dist archive /root/nightlyrpmTGRqFl/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz
~ ~/nightlyrpmTGRqFl ~
INFO: mock.py version 1.4.16 starting (python version = 3.6.8)...
Start: init plugins
INFO: selinux disabled
Finish: init plugins
Start: run
INFO: Start(/root/nightlyrpmTGRqFl/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64)
Start: clean chroot
Finish: clean chroot
Start: chroot init
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled yum cache
Start: cleaning yum metadata
Finish: cleaning yum metadata
INFO: enabled HW Info plugin
Mock Version: 1.4.16
INFO: Mock Version: 1.4.16
Start: yum install
Finish: yum install
Start: creating root cache
Finish: creating root cache
Finish: chroot init
INFO: Installed packages:
Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: Outputting list of installed packages
Finish: Outputting list of installed packages
ERROR: Exception(/root/nightlyrpmTGRqFl/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 23 seconds
INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64
INFO: Cleaning up build root ('cleanup_on_failure=True')
Start: clean chroot
Finish: clean chroot
ERROR: Command failed:
# /usr/bin/systemd-nspawn -q -M e40ed037b74140baa708e3716f3f2cb6 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.5o1no4me:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for :Building remotely : True
Logical operation result is TRUE
Running script : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins373550421914151926.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done 9fc892dc
+---------+----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname | ip_address  | chassis | used_count | current_state | comment  | distro | rel  | centos_version | architecture | node_pool | console_port | flavor |
+---------+----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 67      | n3.pufty | 172.19.3.67 | pufty   | 3646       | Deployed      | 9fc892dc | None   | None | 7              | x86_64       | 1         | 2020         | None   |
+---------+----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0
From ci at centos.org Thu Jun 6 00:40:57 2019
From: ci at centos.org (ci at centos.org)
Date: Thu, 6 Jun 2019 00:40:57 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #190
In-Reply-To: <1614334885.73.1559695024635.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1614334885.73.1559695024635.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <261437696.172.1559781657856.JavaMail.jenkins@jenkins.ci.centos.org>
See 
------------------------------------------
[...truncated 287.17 KB...]
TASK [container-engine/docker : check number of search domains] ****************
Thursday 06 June 2019 01:40:15 +0100 (0:00:00.284) 0:02:59.781 *********
TASK [container-engine/docker : check length of search domains] ****************
Thursday 06 June 2019 01:40:15 +0100 (0:00:00.281) 0:03:00.062 *********
TASK [container-engine/docker : check for minimum kernel version] **************
Thursday 06 June 2019 01:40:15 +0100 (0:00:00.298) 0:03:00.361 *********
TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] ***
Thursday 06 June 2019 01:40:16 +0100 (0:00:00.287) 0:03:00.649 *********
TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] ***
Thursday 06 June 2019 01:40:16 +0100 (0:00:00.627) 0:03:01.276 *********
TASK [container-engine/docker : ensure docker-ce repository public key is installed] ***
Thursday 06 June 2019 01:40:18 +0100 (0:00:01.308) 0:03:02.584 *********
TASK [container-engine/docker : ensure docker-ce repository is enabled] ********
Thursday 06 June 2019 01:40:18 +0100 (0:00:00.285) 0:03:02.870 *********
TASK [container-engine/docker : ensure docker-engine repository public key is installed] ***
Thursday 06 June 2019 01:40:18 +0100 (0:00:00.287) 0:03:03.157 *********
TASK [container-engine/docker : ensure docker-engine repository is enabled] ****
Thursday 06 June 2019 01:40:18 +0100 (0:00:00.299) 0:03:03.457 *********
TASK [container-engine/docker : Configure docker repository on Fedora] *********
Thursday 06 June 2019 01:40:19 +0100 (0:00:00.329) 0:03:03.786 *********
TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] ***
Thursday 06 June 2019 01:40:19 +0100 (0:00:00.277) 0:03:04.063 *********
TASK [container-engine/docker : Copy yum.conf for editing] *********************
Thursday 06 June 2019 01:40:19 +0100 (0:00:00.283) 0:03:04.347 *********
TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ******
Thursday 06 June 2019 01:40:20 +0100 (0:00:00.336) 0:03:04.683 *********
TASK [container-engine/docker : ensure docker packages are installed] **********
Thursday 06 June 2019 01:40:20 +0100 (0:00:00.309) 0:03:04.992 *********
TASK [container-engine/docker : Ensure docker packages are installed] **********
Thursday 06 June 2019 01:40:20 +0100 (0:00:00.360) 0:03:05.353 *********
TASK [container-engine/docker : get available packages on Ubuntu] **************
Thursday 06 June 2019 01:40:21 +0100 (0:00:00.375) 0:03:05.729 *********
TASK [container-engine/docker : show available packages on ubuntu] *************
Thursday 06 June 2019 01:40:21 +0100 (0:00:00.327) 0:03:06.056 *********
TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] ***
Thursday 06 June 2019 01:40:21 +0100 (0:00:00.311) 0:03:06.367 *********
TASK [container-engine/docker : ensure service is started if docker packages are already present] ***
Thursday 06 June 2019 01:40:22 +0100 (0:00:00.284) 0:03:06.652 *********
ok: [kube3]
ok: [kube2]
ok: [kube1]
[WARNING]: flush_handlers task does not support when conditional
TASK [container-engine/docker : set fact for docker_version] *******************
Thursday 06 June 2019 01:40:23 +0100 (0:00:01.910) 0:03:08.562 *********
ok: [kube1]
ok: [kube2]
ok: [kube3]
TASK [container-engine/docker : check minimum docker version for docker_dns mode. You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] ***
Thursday 06 June 2019 01:40:25 +0100 (0:00:01.230) 0:03:09.793 *********
TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] ***
Thursday 06 June 2019 01:40:25 +0100 (0:00:00.301) 0:03:10.094 *********
changed: [kube1]
changed: [kube2]
changed: [kube3]
TASK [container-engine/docker : Write docker proxy drop-in] ********************
Thursday 06 June 2019 01:40:26 +0100 (0:00:01.048) 0:03:11.143 *********
TASK [container-engine/docker : get systemd version] ***************************
Thursday 06 June 2019 01:40:26 +0100 (0:00:00.296) 0:03:11.439 *********
TASK [container-engine/docker : Write docker.service systemd file] *************
Thursday 06 June 2019 01:40:27 +0100 (0:00:00.423) 0:03:11.863 *********
TASK [container-engine/docker : Write docker options systemd drop-in] **********
Thursday 06 June 2019 01:40:27 +0100 (0:00:00.313) 0:03:12.177 *********
changed: [kube1]
changed: [kube2]
changed: [kube3]
TASK [container-engine/docker : Write docker dns systemd drop-in] **************
Thursday 06 June 2019 01:40:29 +0100 (0:00:02.060) 0:03:14.238 *********
changed: [kube1]
changed: [kube3]
changed: [kube2]
TASK [container-engine/docker : Copy docker orphan clean up script to the node] ***
Thursday 06 June 2019 01:40:31 +0100 (0:00:01.980) 0:03:16.218 *********
TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] ***
Thursday 06 June 2019 01:40:31 +0100 (0:00:00.311) 0:03:16.530 *********
RUNNING HANDLER [container-engine/docker : restart docker] *********************
Thursday 06 June 2019 01:40:32 +0100 (0:00:00.228) 0:03:16.759 *********
changed: [kube1]
changed: [kube2]
changed: [kube3]
RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************
Thursday 06 June 2019 01:40:33 +0100 (0:00:00.905) 0:03:17.665 *********
changed: [kube1]
changed: [kube2]
changed: [kube3]
RUNNING HANDLER 
[container-engine/docker : Docker | reload docker.socket] ****** Thursday 06 June 2019 01:40:34 +0100 (0:00:01.222) 0:03:18.887 ********* RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Thursday 06 June 2019 01:40:34 +0100 (0:00:00.273) 0:03:19.161 ********* changed: [kube2] changed: [kube1] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Thursday 06 June 2019 01:40:38 +0100 (0:00:04.325) 0:03:23.486 ********* Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube1] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Thursday 06 June 2019 01:40:49 +0100 (0:00:10.210) 0:03:33.697 ********* changed: [kube1] changed: [kube3] changed: [kube2] TASK [container-engine/docker : ensure docker service is started and enabled] *** Thursday 06 June 2019 01:40:50 +0100 (0:00:01.236) 0:03:34.933 ********* ok: [kube1] => (item=docker) ok: [kube2] => (item=docker) ok: [kube3] => (item=docker) TASK [download : include_tasks] ************************************************ Thursday 06 June 2019 01:40:51 +0100 (0:00:01.406) 0:03:36.340 ********* included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Thursday 06 June 2019 01:40:52 +0100 (0:00:00.502) 0:03:36.842 ********* ok: [kube1] ok: [kube2] ok: [kube3] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Thursday 06 June 2019 01:40:53 +0100 (0:00:01.244) 0:03:38.087 ********* changed: [kube1] changed: [kube3] changed: [kube2] TASK [download : container_download | create local directory for saved/loaded container images] *** Thursday 06 June 2019 01:40:54 +0100 (0:00:01.040) 0:03:39.128 ********* TASK [download : 
Download items] *********************************************** Thursday 06 June 2019 01:40:54 +0100 (0:00:00.105) 0:03:39.234 *********
fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
PLAY RECAP *********************************************************************
kube1 : ok=109 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0
kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0
kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0
Thursday 06 June 2019 01:40:57 +0100 (0:00:02.808) 0:03:42.043 *********
===============================================================================
Install packages ------------------------------------------------------- 33.48s
Wait for host to be available ------------------------------------------ 24.07s
gather facts from all instances ---------------------------------------- 15.89s
container-engine/docker : Docker | pause while Docker restarts --------- 10.21s
Persist loaded modules -------------------------------------------------- 5.90s
container-engine/docker : Docker | reload docker ------------------------ 4.33s
kubernetes/preinstall : Create kubernetes directories ------------------- 4.04s
download : Download items ----------------------------------------------- 2.81s
bootstrap-os : Gather nodes hostnames ----------------------------------- 2.73s
Load required kernel modules -------------------------------------------- 2.58s
kubernetes/preinstall : Create cni directories -------------------------- 2.51s
Extend root VG ---------------------------------------------------------- 2.46s
bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.44s
kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.23s
download : Download items ----------------------------------------------- 2.16s
container-engine/docker : Write docker options systemd drop-in ---------- 2.06s
kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.05s
download : Sync container ----------------------------------------------- 2.00s
container-engine/docker : Write docker dns systemd drop-in -------------- 1.98s
kubernetes/preinstall : Set selinux policy ------------------------------ 1.97s
==> kube3: An error occurred. The error will be shown after all tasks complete.
An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below.
An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again:
Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again.
Build step 'Execute shell' marked build as failure
Performing Post build task...
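[Editor's note for anyone debugging the failure above: newer Ansible releases reject `delegate_to` as an attribute of a task include, so the delegation has to be applied to the included tasks themselves. A minimal sketch of one workaround, using `include_tasks` with `apply:`; the `download_delegate` variable name is illustrative here, not necessarily what kubespray uses, and this is not the actual upstream patch.]

```yaml
# Sketch only -- illustrative, not the actual kubespray fix.
# This form fails on newer Ansible with
# "'delegate_to' is not a valid attribute for a TaskInclude":
#
#   - import_tasks: download_container.yml
#     delegate_to: "{{ download_delegate }}"
#
# Passing the keyword through to the included tasks via apply: is accepted:
- name: container_download | include with delegation applied to inner tasks
  include_tasks: download_container.yml
  apply:
    delegate_to: "{{ download_delegate }}"
```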
Could not match :Build started : False
Logical operation result is FALSE
Skipping script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0
From ci at centos.org Thu Jun 6 01:14:44 2019
From: ci at centos.org (ci at centos.org)
Date: Thu, 6 Jun 2019 01:14:44 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #215
In-Reply-To: <1908525810.82.1559697398679.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1908525810.82.1559697398679.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1817658585.173.1559783684831.JavaMail.jenkins@jenkins.ci.centos.org>
See ------------------------------------------
[...truncated 56.41 KB...]
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Wait for instance(s) creation to complete] *******************************
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
--> Scenario: 'default'
--> Action: 'prepare'
PLAY [Prepare] *****************************************************************
TASK [Gathering Facts] *********************************************************
ok: [instance]
TASK [Install Dependency Packages] *********************************************
changed: [instance]
TASK [Install Dependency Packages] *********************************************
changed: [instance]
PLAY RECAP *********************************************************************
instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: author
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: description
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: company
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: license
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
An error occurred during the test sequence action: 'lint'. Cleaning up.
--> Scenario: 'default'
--> Action: 'destroy'
PLAY [Destroy] *****************************************************************
TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Delete docker network(s)] ************************************************
PLAY RECAP *********************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── create
    └── prepare

--> Scenario: 'default'
--> Action: 'create'
[DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
PLAY [Create] ******************************************************************
TASK [Log into a Docker registry] **********************************************
skipping: [localhost] => (item=None)
TASK [Create Dockerfiles from image names] *************************************
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Discover local Docker images] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]
TASK [Build an Ansible compatible image] ***************************************
ok: [localhost] => (item=None)
ok: [localhost]
TASK [Create docker network(s)] ************************************************
TASK [Determine the CMD directives] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]
TASK [Create molecule instance(s)] *********************************************
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Wait for instance(s) creation to complete] *******************************
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
--> Scenario: 'default'
--> Action: 'prepare'
PLAY [Prepare] *****************************************************************
TASK [Gathering Facts] *********************************************************
ok: [instance]
TASK [Install Dependency Packages] *********************************************
changed: [instance]
TASK [Install Dependency Packages] *********************************************
changed: [instance]
PLAY RECAP *********************************************************************
instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}

[703] Should change default metadata: author
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1

[703] Should change default metadata: description
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1

[703] Should change default metadata: company
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1

[703] Should change default metadata: license
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1

An error occurred during the test sequence action: 'lint'. Cleaning up.
--> Scenario: 'default'
--> Action: 'destroy'

PLAY [Destroy] *****************************************************************

TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Delete docker network(s)] ************************************************

PLAY RECAP *********************************************************************
localhost                  : ok=2    changed=2    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0

Build step 'Execute shell' marked build as failure
Performing Post build task...
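The [701] and [703] findings above are ansible-lint complaining that `galaxy_info` in the role's `meta/main.yml` still carries the placeholder values from `ansible-galaxy init` and lists no platforms. A minimal sketch of a `meta/main.yml` that would satisfy both rules; the author, description, company, and platform values here are illustrative, not taken from the real role:

```yaml
galaxy_info:
  author: Gluster maintainers                      # replaces the "your name" placeholder ([703])
  description: Configure firewalld for GlusterFS   # replaces "your description" ([703])
  company: example.org                             # illustrative; optional ([703])
  license: GPLv2                                   # a concrete license, not "license (GPLv2, CC-BY, etc)" ([703])
  min_ansible_version: 1.2
  platforms:                                       # [701]: role info should contain platforms
    - name: EL
      versions:
        - 7
  galaxy_tags: []

dependencies: []
```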
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Fri Jun 7 00:16:09 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 7 Jun 2019 00:16:09 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #387 In-Reply-To: <574461503.171.1559780162879.JavaMail.jenkins@jenkins.ci.centos.org> References: <574461503.171.1559780162879.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <99500961.246.1559866569511.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 38.58 KB...] Transaction test succeeded Running transaction Installing : python36-libs-3.6.8-1.el7.x86_64 1/52 Installing : python36-3.6.8-1.el7.x86_64 2/52 Installing : apr-1.4.8-3.el7_4.1.x86_64 3/52 Installing : mpfr-3.1.1-4.el7.x86_64 4/52 Installing : libmpc-1.0.1-3.el7.x86_64 5/52 Installing : apr-util-1.5.2-6.el7.x86_64 6/52 Installing : python36-six-1.11.0-3.el7.noarch 7/52 Installing : cpp-4.8.5-36.el7_6.2.x86_64 8/52 Installing : python36-idna-2.7-2.el7.noarch 9/52 Installing : python36-pysocks-1.6.8-6.el7.noarch 10/52 Installing : python36-urllib3-1.19.1-5.el7.noarch 11/52 Installing : python36-pyroute2-0.4.13-2.el7.noarch 12/52 Installing : python36-setuptools-39.2.0-3.el7.noarch 13/52 Installing : python36-chardet-2.3.0-6.el7.noarch 14/52 Installing : python36-requests-2.12.5-3.el7.noarch 15/52 Installing : python36-distro-1.2.0-3.el7.noarch 16/52 Installing : python36-markupsafe-0.23-3.el7.x86_64 17/52 Installing : python36-jinja2-2.8.1-2.el7.noarch 18/52 Installing : python36-rpm-4.11.3-4.el7.x86_64 19/52 Installing : elfutils-0.172-2.el7.x86_64 20/52 Installing : unzip-6.0-19.el7.x86_64 21/52 Installing : dwz-0.11-3.el7.x86_64 22/52 
Installing : bzip2-1.0.6-13.el7.x86_64 23/52 Installing : usermode-1.111-5.el7.x86_64 24/52 Installing : pakchois-0.4-10.el7.x86_64 25/52 Installing : distribution-gpg-keys-1.31-1.el7.noarch 26/52 Installing : mock-core-configs-30.3-1.el7.noarch 27/52 Installing : patch-2.7.1-10.el7_5.x86_64 28/52 Installing : libmodman-2.0.1-8.el7.x86_64 29/52 Installing : libproxy-0.4.11-11.el7.x86_64 30/52 Installing : gdb-7.6.1-114.el7.x86_64 31/52 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/52 Installing : perl-srpm-macros-1-8.el7.noarch 33/52 Installing : pigz-2.3.4-1.el7.x86_64 34/52 Installing : golang-src-1.11.5-1.el7.noarch 35/52 Installing : nettle-2.7.1-8.el7.x86_64 36/52 Installing : zip-3.0-11.el7.x86_64 37/52 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 38/52 Installing : kernel-headers-3.10.0-957.12.2.el7.x86_64 39/52 Installing : glibc-headers-2.17-260.el7_6.5.x86_64 40/52 Installing : glibc-devel-2.17-260.el7_6.5.x86_64 41/52 Installing : gcc-4.8.5-36.el7_6.2.x86_64 42/52 Installing : mercurial-2.6.2-8.el7_4.x86_64 43/52 Installing : trousers-0.3.14-2.el7.x86_64 44/52 Installing : gnutls-3.3.29-9.el7_6.x86_64 45/52 Installing : neon-0.30.0-3.el7.x86_64 46/52 Installing : subversion-libs-1.7.14-14.el7.x86_64 47/52 Installing : subversion-1.7.14-14.el7.x86_64 48/52 Installing : golang-1.11.5-1.el7.x86_64 49/52 Installing : golang-bin-1.11.5-1.el7.x86_64 50/52 Installing : rpm-build-4.11.3-35.el7.x86_64 51/52 Installing : mock-1.4.16-1.el7.noarch 52/52 Verifying : trousers-0.3.14-2.el7.x86_64 1/52 Verifying : python36-idna-2.7-2.el7.noarch 2/52 Verifying : rpm-build-4.11.3-35.el7.x86_64 3/52 Verifying : python36-pysocks-1.6.8-6.el7.noarch 4/52 Verifying : mercurial-2.6.2-8.el7_4.x86_64 5/52 Verifying : kernel-headers-3.10.0-957.12.2.el7.x86_64 6/52 Verifying : zip-3.0-11.el7.x86_64 7/52 Verifying : python36-3.6.8-1.el7.x86_64 8/52 Verifying : subversion-libs-1.7.14-14.el7.x86_64 9/52 Verifying : python36-urllib3-1.19.1-5.el7.noarch 10/52 
Verifying : nettle-2.7.1-8.el7.x86_64 11/52 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 12/52 Verifying : golang-src-1.11.5-1.el7.noarch 13/52 Verifying : python36-pyroute2-0.4.13-2.el7.noarch 14/52 Verifying : pigz-2.3.4-1.el7.x86_64 15/52 Verifying : perl-srpm-macros-1-8.el7.noarch 16/52 Verifying : golang-1.11.5-1.el7.x86_64 17/52 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 18/52 Verifying : glibc-devel-2.17-260.el7_6.5.x86_64 19/52 Verifying : golang-bin-1.11.5-1.el7.x86_64 20/52 Verifying : gdb-7.6.1-114.el7.x86_64 21/52 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 22/52 Verifying : gnutls-3.3.29-9.el7_6.x86_64 23/52 Verifying : mock-1.4.16-1.el7.noarch 24/52 Verifying : libmodman-2.0.1-8.el7.x86_64 25/52 Verifying : python36-setuptools-39.2.0-3.el7.noarch 26/52 Verifying : mpfr-3.1.1-4.el7.x86_64 27/52 Verifying : python36-six-1.11.0-3.el7.noarch 28/52 Verifying : apr-util-1.5.2-6.el7.x86_64 29/52 Verifying : python36-chardet-2.3.0-6.el7.noarch 30/52 Verifying : patch-2.7.1-10.el7_5.x86_64 31/52 Verifying : distribution-gpg-keys-1.31-1.el7.noarch 32/52 Verifying : pakchois-0.4-10.el7.x86_64 33/52 Verifying : mock-core-configs-30.3-1.el7.noarch 34/52 Verifying : usermode-1.111-5.el7.x86_64 35/52 Verifying : apr-1.4.8-3.el7_4.1.x86_64 36/52 Verifying : libproxy-0.4.11-11.el7.x86_64 37/52 Verifying : neon-0.30.0-3.el7.x86_64 38/52 Verifying : bzip2-1.0.6-13.el7.x86_64 39/52 Verifying : subversion-1.7.14-14.el7.x86_64 40/52 Verifying : python36-distro-1.2.0-3.el7.noarch 41/52 Verifying : glibc-headers-2.17-260.el7_6.5.x86_64 42/52 Verifying : dwz-0.11-3.el7.x86_64 43/52 Verifying : unzip-6.0-19.el7.x86_64 44/52 Verifying : python36-markupsafe-0.23-3.el7.x86_64 45/52 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 46/52 Verifying : python36-requests-2.12.5-3.el7.noarch 47/52 Verifying : python36-jinja2-2.8.1-2.el7.noarch 48/52 Verifying : python36-libs-3.6.8-1.el7.x86_64 49/52 Verifying : elfutils-0.172-2.el7.x86_64 50/52 Verifying : 
python36-rpm-4.11.3-4.el7.x86_64 51/52 Verifying : libmpc-1.0.1-3.el7.x86_64 52/52 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.16-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.31-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5 glibc-headers.x86_64 0:2.17-260.el7_6.5 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.12.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.3-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python36.x86_64 0:3.6.8-1.el7 python36-chardet.noarch 0:2.3.0-6.el7 python36-distro.noarch 0:1.2.0-3.el7 python36-idna.noarch 0:2.7-2.el7 python36-jinja2.noarch 0:2.8.1-2.el7 python36-libs.x86_64 0:3.6.8-1.el7 python36-markupsafe.x86_64 0:0.23-3.el7 python36-pyroute2.noarch 0:0.4.13-2.el7 python36-pysocks.noarch 0:1.6.8-6.el7 python36-requests.noarch 0:2.12.5-3.el7 python36-rpm.x86_64 0:4.11.3-4.el7 python36-setuptools.noarch 0:39.2.0-3.el7 python36-six.noarch 0:1.11.0-3.el7 python36-urllib3.noarch 0:1.19.1-5.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0
Installing gometalinter.
Version: 2.0.5
Installing etcd.
Version: v3.3.9
~/nightlyrpmgw9kol/go/src/github.com/gluster/glusterd2 ~
Installing vendored packages
Creating dist archive /root/nightlyrpmgw9kol/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz
Created dist archive /root/nightlyrpmgw9kol/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz
~ ~/nightlyrpmgw9kol ~
INFO: mock.py version 1.4.16 starting (python version = 3.6.8)...
Start: init plugins
INFO: selinux disabled
Finish: init plugins
Start: run
INFO: Start(/root/nightlyrpmgw9kol/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64)
Start: clean chroot
Finish: clean chroot
Start: chroot init
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled yum cache
Start: cleaning yum metadata
Finish: cleaning yum metadata
INFO: enabled HW Info plugin
Mock Version: 1.4.16
INFO: Mock Version: 1.4.16
Start: yum install
Finish: yum install
Start: creating root cache
Finish: creating root cache
Finish: chroot init
INFO: Installed packages:
Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: Outputting list of installed packages
Finish: Outputting list of installed packages
ERROR: Exception(/root/nightlyrpmgw9kol/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 27 seconds
INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64
INFO: Cleaning up build root ('cleanup_on_failure=True')
Start: clean chroot
Finish: clean chroot
ERROR: Command failed:
 # /usr/bin/systemd-nspawn -q -M 8103e74dd8be42e2af3f3129f3bb4146 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.7f0px51z:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for :Building remotely : True
Logical operation result is TRUE
Running script  :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by

SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins3555159164934867448.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done bc40b7aa
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname   | ip_address  | chassis | used_count | current_state | comment  | distro | rel  | centos_version | architecture | node_pool | console_port | flavor |
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 184     | n57.crusty | 172.19.2.57 | crusty  | 3650       | Deployed      | bc40b7aa | None   | None | 7              | x86_64       | 1         | 2560         | None   |
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0

From ci at centos.org  Fri Jun  7 00:37:10 2019
From: ci at centos.org (ci at centos.org)
Date: Fri, 7 Jun 2019 00:37:10 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #191
In-Reply-To: <261437696.172.1559781657856.JavaMail.jenkins@jenkins.ci.centos.org>
References: <261437696.172.1559781657856.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <940507328.247.1559867830586.JavaMail.jenkins@jenkins.ci.centos.org>

See 

------------------------------------------
[...truncated 287.01 KB...]
TASK [container-engine/docker : check number of search domains] **************** Friday 07 June 2019 01:36:44 +0100 (0:00:00.126) 0:01:56.625 *********** TASK [container-engine/docker : check length of search domains] **************** Friday 07 June 2019 01:36:44 +0100 (0:00:00.122) 0:01:56.748 *********** TASK [container-engine/docker : check for minimum kernel version] ************** Friday 07 June 2019 01:36:45 +0100 (0:00:00.131) 0:01:56.879 *********** TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Friday 07 June 2019 01:36:45 +0100 (0:00:00.125) 0:01:57.005 *********** TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] *** Friday 07 June 2019 01:36:45 +0100 (0:00:00.246) 0:01:57.252 *********** TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Friday 07 June 2019 01:36:46 +0100 (0:00:00.640) 0:01:57.892 *********** TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Friday 07 June 2019 01:36:46 +0100 (0:00:00.110) 0:01:58.003 *********** TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Friday 07 June 2019 01:36:46 +0100 (0:00:00.106) 0:01:58.109 *********** TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Friday 07 June 2019 01:36:46 +0100 (0:00:00.139) 0:01:58.248 *********** TASK [container-engine/docker : Configure docker repository on Fedora] ********* Friday 07 June 2019 01:36:46 +0100 (0:00:00.129) 0:01:58.378 *********** TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Friday 07 June 2019 01:36:46 +0100 (0:00:00.122) 0:01:58.501 *********** TASK [container-engine/docker : Copy yum.conf for editing] ********************* Friday 07 June 2019 01:36:46 +0100 (0:00:00.123) 0:01:58.624 *********** TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Friday 07 June 2019 
01:36:46 +0100 (0:00:00.120) 0:01:58.745 *********** TASK [container-engine/docker : ensure docker packages are installed] ********** Friday 07 June 2019 01:36:47 +0100 (0:00:00.121) 0:01:58.866 *********** TASK [container-engine/docker : Ensure docker packages are installed] ********** Friday 07 June 2019 01:36:47 +0100 (0:00:00.150) 0:01:59.016 *********** TASK [container-engine/docker : get available packages on Ubuntu] ************** Friday 07 June 2019 01:36:47 +0100 (0:00:00.149) 0:01:59.166 *********** TASK [container-engine/docker : show available packages on ubuntu] ************* Friday 07 June 2019 01:36:47 +0100 (0:00:00.122) 0:01:59.288 *********** TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Friday 07 June 2019 01:36:47 +0100 (0:00:00.128) 0:01:59.416 *********** TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Friday 07 June 2019 01:36:47 +0100 (0:00:00.129) 0:01:59.546 *********** ok: [kube2] ok: [kube3] ok: [kube1] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Friday 07 June 2019 01:36:48 +0100 (0:00:00.878) 0:02:00.424 *********** ok: [kube1] ok: [kube2] ok: [kube3] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Friday 07 June 2019 01:36:49 +0100 (0:00:00.500) 0:02:00.924 *********** TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Friday 07 June 2019 01:36:49 +0100 (0:00:00.122) 0:02:01.047 *********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Friday 07 June 2019 01:36:49 +0100 (0:00:00.451) 0:02:01.499 *********** TASK [container-engine/docker : get systemd version] *************************** Friday 07 June 2019 01:36:49 +0100 (0:00:00.129) 0:02:01.628 *********** TASK [container-engine/docker : Write docker.service systemd file] ************* Friday 07 June 2019 01:36:49 +0100 (0:00:00.129) 0:02:01.758 *********** TASK [container-engine/docker : Write docker options systemd drop-in] ********** Friday 07 June 2019 01:36:50 +0100 (0:00:00.145) 0:02:01.903 *********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Friday 07 June 2019 01:36:50 +0100 (0:00:00.887) 0:02:02.790 *********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Friday 07 June 2019 01:36:51 +0100 (0:00:00.897) 0:02:03.688 *********** TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Friday 07 June 2019 01:36:52 +0100 (0:00:00.132) 0:02:03.820 *********** RUNNING HANDLER [container-engine/docker : restart docker] ********************* Friday 07 June 2019 01:36:52 +0100 (0:00:00.102) 0:02:03.922 *********** changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Friday 07 June 2019 01:36:52 +0100 (0:00:00.423) 0:02:04.346 *********** changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER 
[container-engine/docker : Docker | reload docker.socket] ****** Friday 07 June 2019 01:36:53 +0100 (0:00:00.548) 0:02:04.894 *********** RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Friday 07 June 2019 01:36:53 +0100 (0:00:00.133) 0:02:05.028 *********** changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Friday 07 June 2019 01:36:56 +0100 (0:00:03.040) 0:02:08.068 *********** Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube1] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Friday 07 June 2019 01:37:06 +0100 (0:00:10.097) 0:02:18.165 *********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : ensure docker service is started and enabled] *** Friday 07 June 2019 01:37:06 +0100 (0:00:00.575) 0:02:18.741 *********** ok: [kube1] => (item=docker) ok: [kube2] => (item=docker) ok: [kube3] => (item=docker) TASK [download : include_tasks] ************************************************ Friday 07 June 2019 01:37:07 +0100 (0:00:00.693) 0:02:19.434 *********** included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Friday 07 June 2019 01:37:07 +0100 (0:00:00.212) 0:02:19.647 *********** ok: [kube2] ok: [kube1] ok: [kube3] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Friday 07 June 2019 01:37:08 +0100 (0:00:00.622) 0:02:20.269 *********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [download : container_download | create local directory for saved/loaded container images] *** Friday 07 June 2019 01:37:08 +0100 (0:00:00.441) 0:02:20.711 *********** TASK [download : 
Download items] ***********************************************
Friday 07 June 2019  01:37:08 +0100 (0:00:00.064)       0:02:20.776 ***********
fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3

PLAY RECAP *********************************************************************
kube1                      : ok=109  changed=22   unreachable=0    failed=10   skipped=116  rescued=0    ignored=0
kube2                      : ok=96   changed=22   unreachable=0    failed=10   skipped=111  rescued=0    ignored=0
kube3                      : ok=94   changed=22   unreachable=0    failed=10   skipped=113  rescued=0    ignored=0

Friday 07 June 2019  01:37:10 +0100 (0:00:01.358)       0:02:22.134 ***********
===============================================================================
Install packages ------------------------------------------------------- 25.66s
Wait for host to be available ------------------------------------------ 16.26s
Extend root VG --------------------------------------------------------- 13.16s
container-engine/docker : Docker | pause while Docker restarts --------- 10.10s
gather facts from all instances ---------------------------------------- 10.00s
Persist loaded modules -------------------------------------------------- 3.43s
container-engine/docker : Docker | reload docker ------------------------ 3.04s
kubernetes/preinstall : Create kubernetes directories ------------------- 1.89s
Load required kernel modules 
-------------------------------------------- 1.75s bootstrap-os : Gather nodes hostnames ----------------------------------- 1.51s bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 1.47s Extend the root LV and FS to occupy remaining space --------------------- 1.46s download : Download items ----------------------------------------------- 1.36s kubernetes/preinstall : Create cni directories -------------------------- 1.25s Gathering Facts --------------------------------------------------------- 1.24s download : Download items ----------------------------------------------- 1.15s bootstrap-os : Create remote_tmp for it is used by another module ------- 1.14s bootstrap-os : Disable fastestmirror plugin ----------------------------- 1.08s kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 1.06s download : Sync container ----------------------------------------------- 1.04s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
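The repeated failure above, "'delegate_to' is not a valid attribute for a TaskInclude", is Ansible rejecting a `delegate_to` keyword placed directly on a task-include of download_container.yml. A minimal sketch of the pattern involved, hedged: the `download_delegate` variable and the `apply:` form below are illustrative, not necessarily the actual kubespray fix.

```yaml
# Rejected by newer Ansible (the failure quoted in the log above):
# 'delegate_to' may not be attached to the include itself.
#
# - include: download_container.yml
#   delegate_to: "{{ download_delegate }}"
#
# One accepted shape: a dynamic include_tasks, with the keyword
# applied to the tasks inside the included file via 'apply'.
- include_tasks:
    file: download_container.yml
    apply:
      delegate_to: "{{ download_delegate }}"
```

`apply` forwards task keywords to every task in the included file, which is why it can carry delegation that the include statement itself cannot.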
Could not match :Build started : False
Logical operation result is FALSE
Skipping script  : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

END OF POST BUILD TASK : 0

From ci at centos.org Fri Jun 7 01:17:20 2019
From: ci at centos.org (ci at centos.org)
Date: Fri, 7 Jun 2019 01:17:20 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #216
In-Reply-To: <1817658585.173.1559783684831.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1817658585.173.1559783684831.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <670109042.255.1559870240747.JavaMail.jenkins@jenkins.ci.centos.org>

See ------------------------------------------
[...truncated 56.41 KB...]
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) creation to complete] *******************************
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=6    changed=4    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0

--> Scenario: 'default'
--> Action: 'prepare'

PLAY [Prepare] *****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

PLAY RECAP *********************************************************************
instance                   : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: author
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {...}}
[703] Should change default metadata: description
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {...}}
[703] Should change default metadata: company
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {...}}
[703] Should change default metadata: license
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {...}}
An error occurred during the test sequence action: 'lint'. Cleaning up.
--> Scenario: 'default'
--> Action: 'destroy'

PLAY [Destroy] *****************************************************************

TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Delete docker network(s)] ************************************************

PLAY RECAP *********************************************************************
localhost                  : ok=2    changed=2    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0

./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── create
    └── prepare

--> Scenario: 'default'
--> Action: 'create'
[DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.

PLAY [Create] ******************************************************************

TASK [Log into a Docker registry] **********************************************
skipping: [localhost] => (item=None)

TASK [Create Dockerfiles from image names] *************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Discover local Docker images] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Build an Ansible compatible image] ***************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Create docker network(s)] ************************************************

TASK [Determine the CMD directives] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Create molecule instance(s)] *********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) creation to complete] *******************************
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=6    changed=3    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0

--> Scenario: 'default'
--> Action: 'prepare'

PLAY [Prepare] *****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

PLAY RECAP *********************************************************************
instance                   : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[703] Should change default metadata: author
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[703] Should change default metadata: description
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[703] Should change default metadata: company
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[703] Should change default metadata: license
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
An error occurred during the test sequence action: 'lint'. Cleaning up.
--> Scenario: 'default'
--> Action: 'destroy'

PLAY [Destroy] *****************************************************************

TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Delete docker network(s)] ************************************************

PLAY RECAP *********************************************************************
localhost                  : ok=2    changed=2    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0

Build step 'Execute shell' marked build as failure
Performing Post build task...
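The ansible-lint [701]/[703] warnings above fire because meta/main.yml still carries the generated placeholder values ('your description', 'your name', 'license (GPLv2, CC-BY, etc)') and lists no platforms. A hedged sketch of a `galaxy_info` block that would satisfy those rules; the author, description, company, and license values below are illustrative, not the project's actual metadata:

```yaml
# Hypothetical meta/main.yml content addressing lint rules 701 and 703;
# all values are illustrative placeholders for the real project metadata.
galaxy_info:
  author: Gluster maintainers
  description: Configures firewalld rules for Gluster storage nodes
  company: Gluster Community
  license: GPLv2
  min_ansible_version: 2.5
  platforms:          # rule [701] wants at least one platform listed
    - name: EL
      versions:
        - 7
  galaxy_tags: []
dependencies: []
```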
Could not match :Build started : False
Logical operation result is FALSE
Skipping script  : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

END OF POST BUILD TASK : 0

From ci at centos.org Sat Jun 8 00:15:59 2019
From: ci at centos.org (ci at centos.org)
Date: Sat, 8 Jun 2019 00:15:59 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #388
In-Reply-To: <99500961.246.1559866569511.JavaMail.jenkins@jenkins.ci.centos.org>
References: <99500961.246.1559866569511.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1920464161.295.1559952959569.JavaMail.jenkins@jenkins.ci.centos.org>

See ------------------------------------------
[...truncated 38.58 KB...]
Transaction test succeeded
Running transaction
  Installing : python36-libs-3.6.8-1.el7.x86_64           1/52
  Installing : python36-3.6.8-1.el7.x86_64                2/52
  Installing : apr-1.4.8-3.el7_4.1.x86_64                 3/52
  Installing : mpfr-3.1.1-4.el7.x86_64                    4/52
  Installing : libmpc-1.0.1-3.el7.x86_64                  5/52
  Installing : apr-util-1.5.2-6.el7.x86_64                6/52
  Installing : python36-six-1.11.0-3.el7.noarch           7/52
  Installing : cpp-4.8.5-36.el7_6.2.x86_64                8/52
  Installing : python36-idna-2.7-2.el7.noarch             9/52
  Installing : python36-pysocks-1.6.8-6.el7.noarch       10/52
  Installing : python36-urllib3-1.19.1-5.el7.noarch      11/52
  Installing : python36-pyroute2-0.4.13-2.el7.noarch     12/52
  Installing : python36-setuptools-39.2.0-3.el7.noarch   13/52
  Installing : python36-chardet-2.3.0-6.el7.noarch       14/52
  Installing : python36-requests-2.12.5-3.el7.noarch     15/52
  Installing : python36-distro-1.2.0-3.el7.noarch        16/52
  Installing : python36-markupsafe-0.23-3.el7.x86_64     17/52
  Installing : python36-jinja2-2.8.1-2.el7.noarch        18/52
  Installing : python36-rpm-4.11.3-4.el7.x86_64          19/52
  Installing : elfutils-0.172-2.el7.x86_64               20/52
  Installing : unzip-6.0-19.el7.x86_64                   21/52
  Installing : dwz-0.11-3.el7.x86_64                     22/52
Installing : bzip2-1.0.6-13.el7.x86_64 23/52 Installing : usermode-1.111-5.el7.x86_64 24/52 Installing : pakchois-0.4-10.el7.x86_64 25/52 Installing : distribution-gpg-keys-1.31-1.el7.noarch 26/52 Installing : mock-core-configs-30.3-1.el7.noarch 27/52 Installing : patch-2.7.1-10.el7_5.x86_64 28/52 Installing : libmodman-2.0.1-8.el7.x86_64 29/52 Installing : libproxy-0.4.11-11.el7.x86_64 30/52 Installing : gdb-7.6.1-114.el7.x86_64 31/52 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/52 Installing : perl-srpm-macros-1-8.el7.noarch 33/52 Installing : pigz-2.3.4-1.el7.x86_64 34/52 Installing : golang-src-1.11.5-1.el7.noarch 35/52 Installing : nettle-2.7.1-8.el7.x86_64 36/52 Installing : zip-3.0-11.el7.x86_64 37/52 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 38/52 Installing : kernel-headers-3.10.0-957.12.2.el7.x86_64 39/52 Installing : glibc-headers-2.17-260.el7_6.5.x86_64 40/52 Installing : glibc-devel-2.17-260.el7_6.5.x86_64 41/52 Installing : gcc-4.8.5-36.el7_6.2.x86_64 42/52 Installing : mercurial-2.6.2-8.el7_4.x86_64 43/52 Installing : trousers-0.3.14-2.el7.x86_64 44/52 Installing : gnutls-3.3.29-9.el7_6.x86_64 45/52 Installing : neon-0.30.0-3.el7.x86_64 46/52 Installing : subversion-libs-1.7.14-14.el7.x86_64 47/52 Installing : subversion-1.7.14-14.el7.x86_64 48/52 Installing : golang-1.11.5-1.el7.x86_64 49/52 Installing : golang-bin-1.11.5-1.el7.x86_64 50/52 Installing : rpm-build-4.11.3-35.el7.x86_64 51/52 Installing : mock-1.4.16-1.el7.noarch 52/52 Verifying : trousers-0.3.14-2.el7.x86_64 1/52 Verifying : python36-idna-2.7-2.el7.noarch 2/52 Verifying : rpm-build-4.11.3-35.el7.x86_64 3/52 Verifying : python36-pysocks-1.6.8-6.el7.noarch 4/52 Verifying : mercurial-2.6.2-8.el7_4.x86_64 5/52 Verifying : kernel-headers-3.10.0-957.12.2.el7.x86_64 6/52 Verifying : zip-3.0-11.el7.x86_64 7/52 Verifying : python36-3.6.8-1.el7.x86_64 8/52 Verifying : subversion-libs-1.7.14-14.el7.x86_64 9/52 Verifying : python36-urllib3-1.19.1-5.el7.noarch 10/52 
Verifying : nettle-2.7.1-8.el7.x86_64 11/52 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 12/52 Verifying : golang-src-1.11.5-1.el7.noarch 13/52 Verifying : python36-pyroute2-0.4.13-2.el7.noarch 14/52 Verifying : pigz-2.3.4-1.el7.x86_64 15/52 Verifying : perl-srpm-macros-1-8.el7.noarch 16/52 Verifying : golang-1.11.5-1.el7.x86_64 17/52 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 18/52 Verifying : glibc-devel-2.17-260.el7_6.5.x86_64 19/52 Verifying : golang-bin-1.11.5-1.el7.x86_64 20/52 Verifying : gdb-7.6.1-114.el7.x86_64 21/52 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 22/52 Verifying : gnutls-3.3.29-9.el7_6.x86_64 23/52 Verifying : mock-1.4.16-1.el7.noarch 24/52 Verifying : libmodman-2.0.1-8.el7.x86_64 25/52 Verifying : python36-setuptools-39.2.0-3.el7.noarch 26/52 Verifying : mpfr-3.1.1-4.el7.x86_64 27/52 Verifying : python36-six-1.11.0-3.el7.noarch 28/52 Verifying : apr-util-1.5.2-6.el7.x86_64 29/52 Verifying : python36-chardet-2.3.0-6.el7.noarch 30/52 Verifying : patch-2.7.1-10.el7_5.x86_64 31/52 Verifying : distribution-gpg-keys-1.31-1.el7.noarch 32/52 Verifying : pakchois-0.4-10.el7.x86_64 33/52 Verifying : mock-core-configs-30.3-1.el7.noarch 34/52 Verifying : usermode-1.111-5.el7.x86_64 35/52 Verifying : apr-1.4.8-3.el7_4.1.x86_64 36/52 Verifying : libproxy-0.4.11-11.el7.x86_64 37/52 Verifying : neon-0.30.0-3.el7.x86_64 38/52 Verifying : bzip2-1.0.6-13.el7.x86_64 39/52 Verifying : subversion-1.7.14-14.el7.x86_64 40/52 Verifying : python36-distro-1.2.0-3.el7.noarch 41/52 Verifying : glibc-headers-2.17-260.el7_6.5.x86_64 42/52 Verifying : dwz-0.11-3.el7.x86_64 43/52 Verifying : unzip-6.0-19.el7.x86_64 44/52 Verifying : python36-markupsafe-0.23-3.el7.x86_64 45/52 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 46/52 Verifying : python36-requests-2.12.5-3.el7.noarch 47/52 Verifying : python36-jinja2-2.8.1-2.el7.noarch 48/52 Verifying : python36-libs-3.6.8-1.el7.x86_64 49/52 Verifying : elfutils-0.172-2.el7.x86_64 50/52 Verifying : 
python36-rpm-4.11.3-4.el7.x86_64 51/52 Verifying : libmpc-1.0.1-3.el7.x86_64 52/52 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.16-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.31-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5 glibc-headers.x86_64 0:2.17-260.el7_6.5 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.12.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.3-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python36.x86_64 0:3.6.8-1.el7 python36-chardet.noarch 0:2.3.0-6.el7 python36-distro.noarch 0:1.2.0-3.el7 python36-idna.noarch 0:2.7-2.el7 python36-jinja2.noarch 0:2.8.1-2.el7 python36-libs.x86_64 0:3.6.8-1.el7 python36-markupsafe.x86_64 0:0.23-3.el7 python36-pyroute2.noarch 0:0.4.13-2.el7 python36-pysocks.noarch 0:1.6.8-6.el7 python36-requests.noarch 0:2.12.5-3.el7 python36-rpm.x86_64 0:4.11.3-4.el7 python36-setuptools.noarch 0:39.2.0-3.el7 python36-six.noarch 0:1.11.0-3.el7 python36-urllib3.noarch 0:1.19.1-5.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0
Installing gometalinter.
Version: 2.0.5
Installing etcd.
Version: v3.3.9
~/nightlyrpmtf2kmJ/go/src/github.com/gluster/glusterd2 ~
Installing vendored packages
Creating dist archive /root/nightlyrpmtf2kmJ/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz
Created dist archive /root/nightlyrpmtf2kmJ/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz
~ ~/nightlyrpmtf2kmJ ~
INFO: mock.py version 1.4.16 starting (python version = 3.6.8)...
Start: init plugins
INFO: selinux disabled
Finish: init plugins
Start: run
INFO: Start(/root/nightlyrpmtf2kmJ/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm)  Config(epel-7-x86_64)
Start: clean chroot
Finish: clean chroot
Start: chroot init
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled yum cache
Start: cleaning yum metadata
Finish: cleaning yum metadata
INFO: enabled HW Info plugin
Mock Version: 1.4.16
INFO: Mock Version: 1.4.16
Start: yum install
Finish: yum install
Start: creating root cache
Finish: creating root cache
Finish: chroot init
INFO: Installed packages:
Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: Outputting list of installed packages
Finish: Outputting list of installed packages
ERROR: Exception(/root/nightlyrpmtf2kmJ/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 23 seconds
INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64
INFO: Cleaning up build root ('cleanup_on_failure=True')
Start: clean chroot
Finish: clean chroot
ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 3110c3ce72fe4cb9abd7059d33ca959f -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.zkrvzt3x:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for :Building remotely : True
Logical operation result is TRUE
Running script  : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins7541538082021080811.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done 9a5f1700
+---------+-----------+------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname  | ip_address | chassis | used_count | current_state | comment  | distro | rel  | centos_version | architecture | node_pool | console_port | flavor |
+---------+-----------+------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 131     | n4.crusty | 172.19.2.4 | crusty  | 3654       | Deployed      | 9a5f1700 | None   | None | 7              | x86_64       | 1         | 2030         | None   |
+---------+-----------+------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0

From ci at centos.org Sat Jun 8 01:19:18 2019
From: ci at centos.org (ci at centos.org)
Date: Sat, 8 Jun 2019 01:19:18 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #217
In-Reply-To: <670109042.255.1559870240747.JavaMail.jenkins@jenkins.ci.centos.org>
References: <670109042.255.1559870240747.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1311242990.297.1559956758673.JavaMail.jenkins@jenkins.ci.centos.org>

See ------------------------------------------
[...truncated 56.47 KB...]
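The post-build step above runs cico-node-done-from-ansible.sh to hand each Duffy node listed in the SSID file back to the pool. That loop can be sketched as a self-contained shell function; the function wrapper, blank-line skip, and quoting below are illustrative hardening, not the CI's actual file:

```shell
#!/bin/sh
# Release every CiCo/Duffy node listed in an SSID file, one ID per line.
# Mirrors the inline cico-node-done-from-ansible.sh shown in the log.
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

release_nodes() {
    # $1 = path to the SSID file
    while IFS= read -r ssid; do
        [ -n "$ssid" ] || continue      # ignore empty lines
        cico -q node done "$ssid"       # hand the node back to the pool
    done < "$1"
}
```

Reading line by line and quoting "$ssid" avoids the word-splitting surprises that the original `for ssid in $(cat ${SSID_FILE})` form can hit.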
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) creation to complete] *******************************
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=6    changed=4    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0

--> Scenario: 'default'
--> Action: 'prepare'

PLAY [Prepare] *****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

PLAY RECAP *********************************************************************
instance                   : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[703] Should change default metadata: author
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[703] Should change default metadata: description
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[703] Should change default metadata: company
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[703] Should change default metadata: license
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
An error occurred during the test sequence action: 'lint'. Cleaning up.
--> Scenario: 'default'
--> Action: 'destroy'

PLAY [Destroy] *****************************************************************

TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Delete docker network(s)] ************************************************

PLAY RECAP *********************************************************************
localhost                  : ok=2    changed=2    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0

./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── create
    ├──
prepare --> Scenario: 'default' --> Action: 'create' [DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg. PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? lint ??? cleanup ??? destroy ??? dependency ??? syntax ??? create ??? prepare ??? converge ??? idempotence ??? side_effect ??? verify ??? cleanup ??? destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', 
u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
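[Editor's note] The ansible-lint failures above are rules 701 ("Role info should contain platforms") and 703 ("Should change default metadata"), both firing because roles/firewall_config/meta/main.yml still carries the ansible-galaxy init placeholders ("your name", "your description", etc.). A sketch of a meta/main.yml that would satisfy both rules — the values below are illustrative, not the project's actual metadata:

```yaml
# roles/firewall_config/meta/main.yml -- illustrative values only
galaxy_info:
  author: Gluster Ansible maintainers        # rule 703: replace "your name"
  description: Configure firewalld for GlusterFS deployments  # replace "your description"
  company: Red Hat                           # optional; replace the placeholder
  license: GPLv3                             # replace "license (GPLv2, CC-BY, etc)"
  min_ansible_version: 2.5
  platforms:                                 # rule 701: platforms must be listed
    - name: EL
      versions:
        - 7
  galaxy_tags: []
dependencies: []
```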
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Sun Jun 9 00:13:48 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 9 Jun 2019 00:13:48 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #389 In-Reply-To: <1920464161.295.1559952959569.JavaMail.jenkins@jenkins.ci.centos.org> References: <1920464161.295.1559952959569.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1845729718.349.1560039229034.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 38.62 KB...] Transaction test succeeded Running transaction Installing : python36-libs-3.6.8-1.el7.x86_64 1/52 Installing : python36-3.6.8-1.el7.x86_64 2/52 Installing : apr-1.4.8-3.el7_4.1.x86_64 3/52 Installing : mpfr-3.1.1-4.el7.x86_64 4/52 Installing : libmpc-1.0.1-3.el7.x86_64 5/52 Installing : apr-util-1.5.2-6.el7.x86_64 6/52 Installing : python36-six-1.11.0-3.el7.noarch 7/52 Installing : cpp-4.8.5-36.el7_6.2.x86_64 8/52 Installing : python36-idna-2.7-2.el7.noarch 9/52 Installing : python36-pysocks-1.6.8-6.el7.noarch 10/52 Installing : python36-urllib3-1.19.1-5.el7.noarch 11/52 Installing : python36-pyroute2-0.4.13-2.el7.noarch 12/52 Installing : python36-setuptools-39.2.0-3.el7.noarch 13/52 Installing : python36-chardet-2.3.0-6.el7.noarch 14/52 Installing : python36-requests-2.12.5-3.el7.noarch 15/52 Installing : python36-distro-1.2.0-3.el7.noarch 16/52 Installing : python36-markupsafe-0.23-3.el7.x86_64 17/52 Installing : python36-jinja2-2.8.1-2.el7.noarch 18/52 Installing : python36-rpm-4.11.3-4.el7.x86_64 19/52 Installing : elfutils-0.172-2.el7.x86_64 20/52 Installing : unzip-6.0-19.el7.x86_64 21/52 Installing : dwz-0.11-3.el7.x86_64 
22/52 Installing : bzip2-1.0.6-13.el7.x86_64 23/52 Installing : usermode-1.111-5.el7.x86_64 24/52 Installing : pakchois-0.4-10.el7.x86_64 25/52 Installing : distribution-gpg-keys-1.31-1.el7.noarch 26/52 Installing : mock-core-configs-30.3-1.el7.noarch 27/52 Installing : patch-2.7.1-10.el7_5.x86_64 28/52 Installing : libmodman-2.0.1-8.el7.x86_64 29/52 Installing : libproxy-0.4.11-11.el7.x86_64 30/52 Installing : gdb-7.6.1-114.el7.x86_64 31/52 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/52 Installing : perl-srpm-macros-1-8.el7.noarch 33/52 Installing : pigz-2.3.4-1.el7.x86_64 34/52 Installing : golang-src-1.11.5-1.el7.noarch 35/52 Installing : nettle-2.7.1-8.el7.x86_64 36/52 Installing : zip-3.0-11.el7.x86_64 37/52 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 38/52 Installing : kernel-headers-3.10.0-957.12.2.el7.x86_64 39/52 Installing : glibc-headers-2.17-260.el7_6.5.x86_64 40/52 Installing : glibc-devel-2.17-260.el7_6.5.x86_64 41/52 Installing : gcc-4.8.5-36.el7_6.2.x86_64 42/52 Installing : mercurial-2.6.2-8.el7_4.x86_64 43/52 Installing : trousers-0.3.14-2.el7.x86_64 44/52 Installing : gnutls-3.3.29-9.el7_6.x86_64 45/52 Installing : neon-0.30.0-3.el7.x86_64 46/52 Installing : subversion-libs-1.7.14-14.el7.x86_64 47/52 Installing : subversion-1.7.14-14.el7.x86_64 48/52 Installing : golang-1.11.5-1.el7.x86_64 49/52 Installing : golang-bin-1.11.5-1.el7.x86_64 50/52 Installing : rpm-build-4.11.3-35.el7.x86_64 51/52 Installing : mock-1.4.16-1.el7.noarch 52/52 Verifying : trousers-0.3.14-2.el7.x86_64 1/52 Verifying : python36-idna-2.7-2.el7.noarch 2/52 Verifying : rpm-build-4.11.3-35.el7.x86_64 3/52 Verifying : python36-pysocks-1.6.8-6.el7.noarch 4/52 Verifying : mercurial-2.6.2-8.el7_4.x86_64 5/52 Verifying : kernel-headers-3.10.0-957.12.2.el7.x86_64 6/52 Verifying : zip-3.0-11.el7.x86_64 7/52 Verifying : python36-3.6.8-1.el7.x86_64 8/52 Verifying : subversion-libs-1.7.14-14.el7.x86_64 9/52 Verifying : python36-urllib3-1.19.1-5.el7.noarch 10/52 
Verifying : nettle-2.7.1-8.el7.x86_64 11/52 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 12/52 Verifying : golang-src-1.11.5-1.el7.noarch 13/52 Verifying : python36-pyroute2-0.4.13-2.el7.noarch 14/52 Verifying : pigz-2.3.4-1.el7.x86_64 15/52 Verifying : perl-srpm-macros-1-8.el7.noarch 16/52 Verifying : golang-1.11.5-1.el7.x86_64 17/52 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 18/52 Verifying : glibc-devel-2.17-260.el7_6.5.x86_64 19/52 Verifying : golang-bin-1.11.5-1.el7.x86_64 20/52 Verifying : gdb-7.6.1-114.el7.x86_64 21/52 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 22/52 Verifying : gnutls-3.3.29-9.el7_6.x86_64 23/52 Verifying : mock-1.4.16-1.el7.noarch 24/52 Verifying : libmodman-2.0.1-8.el7.x86_64 25/52 Verifying : python36-setuptools-39.2.0-3.el7.noarch 26/52 Verifying : mpfr-3.1.1-4.el7.x86_64 27/52 Verifying : python36-six-1.11.0-3.el7.noarch 28/52 Verifying : apr-util-1.5.2-6.el7.x86_64 29/52 Verifying : python36-chardet-2.3.0-6.el7.noarch 30/52 Verifying : patch-2.7.1-10.el7_5.x86_64 31/52 Verifying : distribution-gpg-keys-1.31-1.el7.noarch 32/52 Verifying : pakchois-0.4-10.el7.x86_64 33/52 Verifying : mock-core-configs-30.3-1.el7.noarch 34/52 Verifying : usermode-1.111-5.el7.x86_64 35/52 Verifying : apr-1.4.8-3.el7_4.1.x86_64 36/52 Verifying : libproxy-0.4.11-11.el7.x86_64 37/52 Verifying : neon-0.30.0-3.el7.x86_64 38/52 Verifying : bzip2-1.0.6-13.el7.x86_64 39/52 Verifying : subversion-1.7.14-14.el7.x86_64 40/52 Verifying : python36-distro-1.2.0-3.el7.noarch 41/52 Verifying : glibc-headers-2.17-260.el7_6.5.x86_64 42/52 Verifying : dwz-0.11-3.el7.x86_64 43/52 Verifying : unzip-6.0-19.el7.x86_64 44/52 Verifying : python36-markupsafe-0.23-3.el7.x86_64 45/52 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 46/52 Verifying : python36-requests-2.12.5-3.el7.noarch 47/52 Verifying : python36-jinja2-2.8.1-2.el7.noarch 48/52 Verifying : python36-libs-3.6.8-1.el7.x86_64 49/52 Verifying : elfutils-0.172-2.el7.x86_64 50/52 Verifying : 
python36-rpm-4.11.3-4.el7.x86_64 51/52 Verifying : libmpc-1.0.1-3.el7.x86_64 52/52 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.16-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.31-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5 glibc-headers.x86_64 0:2.17-260.el7_6.5 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.12.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.3-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python36.x86_64 0:3.6.8-1.el7 python36-chardet.noarch 0:2.3.0-6.el7 python36-distro.noarch 0:1.2.0-3.el7 python36-idna.noarch 0:2.7-2.el7 python36-jinja2.noarch 0:2.8.1-2.el7 python36-libs.x86_64 0:3.6.8-1.el7 python36-markupsafe.x86_64 0:0.23-3.el7 python36-pyroute2.noarch 0:0.4.13-2.el7 python36-pysocks.noarch 0:1.6.8-6.el7 python36-requests.noarch 0:2.12.5-3.el7 python36-rpm.x86_64 0:4.11.3-4.el7 python36-setuptools.noarch 0:39.2.0-3.el7 python36-six.noarch 0:1.11.0-3.el7 python36-urllib3.noarch 0:1.19.1-5.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0 [...curl progress meter omitted...] Installing gometalinter. Version: 2.0.5 [...curl progress meter omitted...] Installing etcd. Version: v3.3.9 [...curl progress meter omitted...] ~/nightlyrpmaqODJO/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmaqODJO/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmaqODJO/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmaqODJO ~ INFO: mock.py version 1.4.16 starting (python version = 3.6.8)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmaqODJO/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.16 INFO: Mock Version: 1.4.16 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmaqODJO/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 1 minutes 32 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M f943597aae9445ddac06cf876aee40ae -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.w0ckdmo2:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
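[Editor's note] The mock error above only reports that rpmbuild exited non-zero inside the chroot; the detailed failure is in the chroot's build.log (here redirected by the CI config to /srv/glusterd2/nightly/master/7/x86_64). A sketch of reproducing the step on a box with mock installed — paths are taken from the log above, and the commands are printed rather than executed so the sketch is safe to run anywhere:

```shell
# Rebuild the SRPM the way the CI job does. -r names the mock config and
# --rebuild takes the source RPM; after a failure, build.log and root.log
# in the result directory (by default under /var/lib/mock/<config>/result)
# hold the full rpmbuild output that the Jenkins console truncates.
SRPM=/root/nightlyrpmaqODJO/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
CFG=epel-7-x86_64

echo "mock -r $CFG --rebuild $SRPM"
echo "less /var/lib/mock/$CFG/result/build.log"
```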
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins3201065213955384088.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done aa0dd743 +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 238 | n47.dusty | 172.19.2.111 | dusty | 3656 | Deployed | aa0dd743 | None | None | 7 | x86_64 | 1 | 2460 | None | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Sun Jun 9 00:41:01 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 9 Jun 2019 00:41:01 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #193 In-Reply-To: <862453831.296.1559954329106.JavaMail.jenkins@jenkins.ci.centos.org> References: <862453831.296.1559954329106.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1606114781.350.1560040861089.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 287.53 KB...] 
TASK [container-engine/docker : check number of search domains] **************** Sunday 09 June 2019 01:40:17 +0100 (0:00:00.313) 0:03:04.226 *********** TASK [container-engine/docker : check length of search domains] **************** Sunday 09 June 2019 01:40:18 +0100 (0:00:00.453) 0:03:04.679 *********** TASK [container-engine/docker : check for minimum kernel version] ************** Sunday 09 June 2019 01:40:18 +0100 (0:00:00.347) 0:03:05.026 *********** TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Sunday 09 June 2019 01:40:19 +0100 (0:00:00.318) 0:03:05.345 *********** TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] *** Sunday 09 June 2019 01:40:19 +0100 (0:00:00.610) 0:03:05.955 *********** TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Sunday 09 June 2019 01:40:21 +0100 (0:00:01.458) 0:03:07.414 *********** TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Sunday 09 June 2019 01:40:21 +0100 (0:00:00.274) 0:03:07.688 *********** TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Sunday 09 June 2019 01:40:21 +0100 (0:00:00.257) 0:03:07.946 *********** TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Sunday 09 June 2019 01:40:22 +0100 (0:00:00.311) 0:03:08.258 *********** TASK [container-engine/docker : Configure docker repository on Fedora] ********* Sunday 09 June 2019 01:40:22 +0100 (0:00:00.312) 0:03:08.570 *********** TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Sunday 09 June 2019 01:40:22 +0100 (0:00:00.296) 0:03:08.867 *********** TASK [container-engine/docker : Copy yum.conf for editing] ********************* Sunday 09 June 2019 01:40:22 +0100 (0:00:00.298) 0:03:09.165 *********** TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Sunday 09 June 2019 
01:40:23 +0100 (0:00:00.286) 0:03:09.452 *********** TASK [container-engine/docker : ensure docker packages are installed] ********** Sunday 09 June 2019 01:40:23 +0100 (0:00:00.289) 0:03:09.742 *********** TASK [container-engine/docker : Ensure docker packages are installed] ********** Sunday 09 June 2019 01:40:23 +0100 (0:00:00.373) 0:03:10.115 *********** TASK [container-engine/docker : get available packages on Ubuntu] ************** Sunday 09 June 2019 01:40:24 +0100 (0:00:00.342) 0:03:10.458 *********** TASK [container-engine/docker : show available packages on ubuntu] ************* Sunday 09 June 2019 01:40:24 +0100 (0:00:00.292) 0:03:10.751 *********** TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Sunday 09 June 2019 01:40:24 +0100 (0:00:00.298) 0:03:11.049 *********** TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Sunday 09 June 2019 01:40:25 +0100 (0:00:00.294) 0:03:11.344 *********** ok: [kube2] ok: [kube3] ok: [kube1] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Sunday 09 June 2019 01:40:27 +0100 (0:00:02.183) 0:03:13.527 *********** ok: [kube2] ok: [kube1] ok: [kube3] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Sunday 09 June 2019 01:40:28 +0100 (0:00:01.236) 0:03:14.764 *********** TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Sunday 09 June 2019 01:40:28 +0100 (0:00:00.288) 0:03:15.053 *********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Sunday 09 June 2019 01:40:29 +0100 (0:00:01.019) 0:03:16.072 *********** TASK [container-engine/docker : get systemd version] *************************** Sunday 09 June 2019 01:40:30 +0100 (0:00:00.350) 0:03:16.423 *********** TASK [container-engine/docker : Write docker.service systemd file] ************* Sunday 09 June 2019 01:40:30 +0100 (0:00:00.358) 0:03:16.781 *********** TASK [container-engine/docker : Write docker options systemd drop-in] ********** Sunday 09 June 2019 01:40:30 +0100 (0:00:00.310) 0:03:17.092 *********** changed: [kube2] changed: [kube1] changed: [kube3] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Sunday 09 June 2019 01:40:32 +0100 (0:00:01.990) 0:03:19.083 *********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Sunday 09 June 2019 01:40:35 +0100 (0:00:02.159) 0:03:21.242 *********** TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Sunday 09 June 2019 01:40:35 +0100 (0:00:00.400) 0:03:21.643 *********** RUNNING HANDLER [container-engine/docker : restart docker] ********************* Sunday 09 June 2019 01:40:35 +0100 (0:00:00.262) 0:03:21.906 *********** changed: [kube2] changed: [kube1] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Sunday 09 June 2019 01:40:36 +0100 (0:00:01.003) 0:03:22.910 *********** changed: [kube2] changed: [kube1] changed: [kube3] RUNNING HANDLER 
[container-engine/docker : Docker | reload docker.socket] ****** Sunday 09 June 2019 01:40:37 +0100 (0:00:01.221) 0:03:24.131 *********** RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Sunday 09 June 2019 01:40:38 +0100 (0:00:00.287) 0:03:24.419 *********** changed: [kube2] changed: [kube1] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Sunday 09 June 2019 01:40:42 +0100 (0:00:04.153) 0:03:28.572 *********** Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube2] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Sunday 09 June 2019 01:40:52 +0100 (0:00:10.189) 0:03:38.762 *********** changed: [kube2] changed: [kube1] changed: [kube3] TASK [container-engine/docker : ensure docker service is started and enabled] *** Sunday 09 June 2019 01:40:53 +0100 (0:00:01.381) 0:03:40.144 *********** ok: [kube1] => (item=docker) ok: [kube2] => (item=docker) ok: [kube3] => (item=docker) TASK [download : include_tasks] ************************************************ Sunday 09 June 2019 01:40:55 +0100 (0:00:01.197) 0:03:41.341 *********** included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Sunday 09 June 2019 01:40:55 +0100 (0:00:00.517) 0:03:41.859 *********** ok: [kube1] ok: [kube2] ok: [kube3] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Sunday 09 June 2019 01:40:56 +0100 (0:00:01.153) 0:03:43.012 *********** changed: [kube2] changed: [kube1] changed: [kube3] TASK [download : container_download | create local directory for saved/loaded container images] *** Sunday 09 June 2019 01:40:57 +0100 (0:00:00.909) 0:03:43.922 *********** TASK [download : 
Download items] *********************************************** Sunday 09 June 2019 01:40:57 +0100 (0:00:00.109) 0:03:44.032 *********** fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} PLAY RECAP ********************************************************************* kube1 : ok=108 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0 kube2 : ok=97 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0 kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0 Sunday 09 June 2019 01:41:00 +0100 (0:00:02.865) 0:03:46.898 *********** =============================================================================== Install packages ------------------------------------------------------- 33.20s Wait for host to be available ------------------------------------------ 24.08s gather facts from all instances ---------------------------------------- 17.09s container-engine/docker : Docker | pause while Docker restarts --------- 10.19s Persist loaded modules -------------------------------------------------- 6.12s container-engine/docker : Docker | reload docker ------------------------ 4.15s kubernetes/preinstall : Create kubernetes directories ------------------- 4.00s download : Download items ----------------------------------------------- 2.87s bootstrap-os : Gather nodes 
hostnames ----------------------------------- 2.80s Load required kernel modules -------------------------------------------- 2.66s kubernetes/preinstall : Create cni directories -------------------------- 2.66s Extend root VG ---------------------------------------------------------- 2.49s bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.43s download : Sync container ----------------------------------------------- 2.29s container-engine/docker : ensure service is started if docker packages are already present --- 2.18s container-engine/docker : Write docker dns systemd drop-in -------------- 2.16s kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.13s download : Download items ----------------------------------------------- 2.12s Gathering Facts --------------------------------------------------------- 2.09s kubernetes/preinstall : Set selinux policy ------------------------------ 2.07s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
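The failure repeated above for every host is an Ansible 2.8 keyword-validation error: `delegate_to` is rejected when it appears directly on a dynamic `include_tasks` (internally a TaskInclude). A minimal sketch of the failing pattern and one way to restate it — the file name and variable here are illustrative, not the actual kubespray source:

```yaml
# Fails on Ansible >= 2.8: delegate_to is not a valid attribute
# for a dynamic include (TaskInclude).
- name: container_download | include per-container download tasks
  include_tasks: download_container.yml
  delegate_to: "{{ download_host }}"

# One possible fix: forward the keyword through `apply`, which applies
# it to the tasks inside the included file (supported since Ansible 2.7).
- name: container_download | include per-container download tasks
  include_tasks:
    file: download_container.yml
    apply:
      delegate_to: "{{ download_host }}"
```

Alternatively, a static `import_tasks` accepts `delegate_to` directly, at the cost of losing per-item dynamic inclusion.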
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Sun Jun 9 01:12:35 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 9 Jun 2019 01:12:35 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #218 In-Reply-To: <1311242990.297.1559956758673.JavaMail.jenkins@jenkins.ci.centos.org> References: <1311242990.297.1559956758673.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1687307676.358.1560042756020.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 56.41 KB...] changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. 
Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... [701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], 
u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── create └── prepare --> Scenario: 'default' --> Action: 'create' [DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg. PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] 
******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', 
u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
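The ansible-lint [701] and [703] findings above fire because meta/main.yml still carries the `ansible-galaxy init` placeholder values ("your name", "your description", and so on) and lists no platforms. A sketch of the shape the linter expects — the author, description, company, and license values below are illustrative, not the project's actual metadata:

```yaml
# roles/firewall_config/meta/main.yml (illustrative values)
galaxy_info:
  author: Gluster community                        # replaces "your name"; fixes [703]
  description: Firewall configuration for Gluster  # replaces "your description"; fixes [703]
  company: Red Hat                                 # optional; fixes [703]
  license: GPLv3                                   # replaces the placeholder; fixes [703]
  min_ansible_version: 1.2
  platforms:                                       # fixes [701] "Role info should contain platforms"
    - name: EL
      versions:
        - 7
  galaxy_tags: []
dependencies: []
```

With the placeholders replaced and a platforms list added, the lint action no longer aborts the molecule test sequence at this step.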
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Mon Jun 10 00:16:07 2019 From: ci at centos.org (ci at centos.org) Date: Mon, 10 Jun 2019 00:16:07 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #390 In-Reply-To: <1845729718.349.1560039229034.JavaMail.jenkins@jenkins.ci.centos.org> References: <1845729718.349.1560039229034.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <494784789.391.1560125767413.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 38.59 KB...] Transaction test succeeded Running transaction Installing : python36-libs-3.6.8-1.el7.x86_64 1/52 Installing : python36-3.6.8-1.el7.x86_64 2/52 Installing : apr-1.4.8-3.el7_4.1.x86_64 3/52 Installing : mpfr-3.1.1-4.el7.x86_64 4/52 Installing : libmpc-1.0.1-3.el7.x86_64 5/52 Installing : apr-util-1.5.2-6.el7.x86_64 6/52 Installing : python36-six-1.11.0-3.el7.noarch 7/52 Installing : cpp-4.8.5-36.el7_6.2.x86_64 8/52 Installing : python36-idna-2.7-2.el7.noarch 9/52 Installing : python36-pysocks-1.6.8-6.el7.noarch 10/52 Installing : python36-urllib3-1.19.1-5.el7.noarch 11/52 Installing : python36-pyroute2-0.4.13-2.el7.noarch 12/52 Installing : python36-setuptools-39.2.0-3.el7.noarch 13/52 Installing : python36-chardet-2.3.0-6.el7.noarch 14/52 Installing : python36-requests-2.12.5-3.el7.noarch 15/52 Installing : python36-distro-1.2.0-3.el7.noarch 16/52 Installing : python36-markupsafe-0.23-3.el7.x86_64 17/52 Installing : python36-jinja2-2.8.1-2.el7.noarch 18/52 Installing : python36-rpm-4.11.3-4.el7.x86_64 19/52 Installing : elfutils-0.172-2.el7.x86_64 20/52 Installing : unzip-6.0-19.el7.x86_64 21/52 Installing : dwz-0.11-3.el7.x86_64 
22/52 Installing : bzip2-1.0.6-13.el7.x86_64 23/52 Installing : usermode-1.111-5.el7.x86_64 24/52 Installing : pakchois-0.4-10.el7.x86_64 25/52 Installing : distribution-gpg-keys-1.31-1.el7.noarch 26/52 Installing : mock-core-configs-30.3-1.el7.noarch 27/52 Installing : patch-2.7.1-10.el7_5.x86_64 28/52 Installing : libmodman-2.0.1-8.el7.x86_64 29/52 Installing : libproxy-0.4.11-11.el7.x86_64 30/52 Installing : gdb-7.6.1-114.el7.x86_64 31/52 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/52 Installing : perl-srpm-macros-1-8.el7.noarch 33/52 Installing : pigz-2.3.4-1.el7.x86_64 34/52 Installing : golang-src-1.11.5-1.el7.noarch 35/52 Installing : nettle-2.7.1-8.el7.x86_64 36/52 Installing : zip-3.0-11.el7.x86_64 37/52 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 38/52 Installing : kernel-headers-3.10.0-957.12.2.el7.x86_64 39/52 Installing : glibc-headers-2.17-260.el7_6.5.x86_64 40/52 Installing : glibc-devel-2.17-260.el7_6.5.x86_64 41/52 Installing : gcc-4.8.5-36.el7_6.2.x86_64 42/52 Installing : mercurial-2.6.2-8.el7_4.x86_64 43/52 Installing : trousers-0.3.14-2.el7.x86_64 44/52 Installing : gnutls-3.3.29-9.el7_6.x86_64 45/52 Installing : neon-0.30.0-3.el7.x86_64 46/52 Installing : subversion-libs-1.7.14-14.el7.x86_64 47/52 Installing : subversion-1.7.14-14.el7.x86_64 48/52 Installing : golang-1.11.5-1.el7.x86_64 49/52 Installing : golang-bin-1.11.5-1.el7.x86_64 50/52 Installing : rpm-build-4.11.3-35.el7.x86_64 51/52 Installing : mock-1.4.16-1.el7.noarch 52/52 Verifying : trousers-0.3.14-2.el7.x86_64 1/52 Verifying : python36-idna-2.7-2.el7.noarch 2/52 Verifying : rpm-build-4.11.3-35.el7.x86_64 3/52 Verifying : python36-pysocks-1.6.8-6.el7.noarch 4/52 Verifying : mercurial-2.6.2-8.el7_4.x86_64 5/52 Verifying : kernel-headers-3.10.0-957.12.2.el7.x86_64 6/52 Verifying : zip-3.0-11.el7.x86_64 7/52 Verifying : python36-3.6.8-1.el7.x86_64 8/52 Verifying : subversion-libs-1.7.14-14.el7.x86_64 9/52 Verifying : python36-urllib3-1.19.1-5.el7.noarch 10/52 
Verifying : nettle-2.7.1-8.el7.x86_64 11/52 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 12/52 Verifying : golang-src-1.11.5-1.el7.noarch 13/52 Verifying : python36-pyroute2-0.4.13-2.el7.noarch 14/52 Verifying : pigz-2.3.4-1.el7.x86_64 15/52 Verifying : perl-srpm-macros-1-8.el7.noarch 16/52 Verifying : golang-1.11.5-1.el7.x86_64 17/52 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 18/52 Verifying : glibc-devel-2.17-260.el7_6.5.x86_64 19/52 Verifying : golang-bin-1.11.5-1.el7.x86_64 20/52 Verifying : gdb-7.6.1-114.el7.x86_64 21/52 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 22/52 Verifying : gnutls-3.3.29-9.el7_6.x86_64 23/52 Verifying : mock-1.4.16-1.el7.noarch 24/52 Verifying : libmodman-2.0.1-8.el7.x86_64 25/52 Verifying : python36-setuptools-39.2.0-3.el7.noarch 26/52 Verifying : mpfr-3.1.1-4.el7.x86_64 27/52 Verifying : python36-six-1.11.0-3.el7.noarch 28/52 Verifying : apr-util-1.5.2-6.el7.x86_64 29/52 Verifying : python36-chardet-2.3.0-6.el7.noarch 30/52 Verifying : patch-2.7.1-10.el7_5.x86_64 31/52 Verifying : distribution-gpg-keys-1.31-1.el7.noarch 32/52 Verifying : pakchois-0.4-10.el7.x86_64 33/52 Verifying : mock-core-configs-30.3-1.el7.noarch 34/52 Verifying : usermode-1.111-5.el7.x86_64 35/52 Verifying : apr-1.4.8-3.el7_4.1.x86_64 36/52 Verifying : libproxy-0.4.11-11.el7.x86_64 37/52 Verifying : neon-0.30.0-3.el7.x86_64 38/52 Verifying : bzip2-1.0.6-13.el7.x86_64 39/52 Verifying : subversion-1.7.14-14.el7.x86_64 40/52 Verifying : python36-distro-1.2.0-3.el7.noarch 41/52 Verifying : glibc-headers-2.17-260.el7_6.5.x86_64 42/52 Verifying : dwz-0.11-3.el7.x86_64 43/52 Verifying : unzip-6.0-19.el7.x86_64 44/52 Verifying : python36-markupsafe-0.23-3.el7.x86_64 45/52 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 46/52 Verifying : python36-requests-2.12.5-3.el7.noarch 47/52 Verifying : python36-jinja2-2.8.1-2.el7.noarch 48/52 Verifying : python36-libs-3.6.8-1.el7.x86_64 49/52 Verifying : elfutils-0.172-2.el7.x86_64 50/52 Verifying : 
python36-rpm-4.11.3-4.el7.x86_64 51/52 Verifying : libmpc-1.0.1-3.el7.x86_64 52/52 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.16-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.31-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5 glibc-headers.x86_64 0:2.17-260.el7_6.5 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.12.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.3-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python36.x86_64 0:3.6.8-1.el7 python36-chardet.noarch 0:2.3.0-6.el7 python36-distro.noarch 0:1.2.0-3.el7 python36-idna.noarch 0:2.7-2.el7 python36-jinja2.noarch 0:2.8.1-2.el7 python36-libs.x86_64 0:3.6.8-1.el7 python36-markupsafe.x86_64 0:0.23-3.el7 python36-pyroute2.noarch 0:0.4.13-2.el7 python36-pysocks.noarch 0:1.6.8-6.el7 python36-requests.noarch 0:2.12.5-3.el7 python36-rpm.x86_64 0:4.11.3-4.el7 python36-setuptools.noarch 0:39.2.0-3.el7 python36-six.noarch 0:1.11.0-3.el7 python36-urllib3.noarch 0:1.19.1-5.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1818 0 --:--:-- --:--:-- --:--:-- 1827 100 8513k 100 8513k 0 0 10.5M 0 --:--:-- --:--:-- --:--:-- 10.5M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2207 0 --:--:-- --:--:-- --:--:-- 2207 100 38.3M 100 38.3M 0 0 46.4M 0 --:--:-- --:--:-- --:--:-- 46.4M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 564 0 --:--:-- --:--:-- --:--:-- 566 0 0 0 620 0 0 1588 0 --:--:-- --:--:-- --:--:-- 1588 100 10.7M 100 10.7M 0 0 14.5M 0 --:--:-- --:--:-- --:--:-- 14.5M ~/nightlyrpmxIaZiq/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmxIaZiq/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmxIaZiq/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmxIaZiq ~ INFO: mock.py version 1.4.16 starting (python version = 3.6.8)... 
Start: init plugins
INFO: selinux disabled
Finish: init plugins
Start: run
INFO: Start(/root/nightlyrpmxIaZiq/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64)
Start: clean chroot
Finish: clean chroot
Start: chroot init
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled yum cache
Start: cleaning yum metadata
Finish: cleaning yum metadata
INFO: enabled HW Info plugin
Mock Version: 1.4.16
INFO: Mock Version: 1.4.16
Start: yum install
Finish: yum install
Start: creating root cache
Finish: creating root cache
Finish: chroot init
INFO: Installed packages:
Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: Outputting list of installed packages
Finish: Outputting list of installed packages
ERROR: Exception(/root/nightlyrpmxIaZiq/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 27 seconds
INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64
INFO: Cleaning up build root ('cleanup_on_failure=True')
Start: clean chroot
Finish: clean chroot
ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M d60c459e853b45d9831f1a07f152b7b2 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.bcutmwg5:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec
Build step 'Execute shell' marked build as failure
Performing Post build task...
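The mock failure above ends at the rpmbuild phase and points at /srv/glusterd2/nightly/master/7/x86_64 for results; the actual rpmbuild error lives in build.log there, not in this console output. A minimal sketch of reproducing the build outside Jenkins, as a dry run (the leading echo is deliberate; drop it to execute, which assumes mock and the epel-7-x86_64 config are installed locally):

```shell
# Reproduce the failed mock build from this log. The SRPM path is taken
# verbatim from the output above; the "echo" makes this a dry run.
SRPM=/root/nightlyrpmxIaZiq/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
echo mock -r epel-7-x86_64 --rebuild "$SRPM"
# After a real run, inspect build.log inside the result directory that
# mock reports ("INFO: Results and/or logs in: ...").
```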
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins5226334167144468819.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done fef4194e +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 116 | n52.pufty | 172.19.3.116 | pufty | 3660 | Deployed | fef4194e | None | None | 7 | x86_64 | 1 | 2510 | None | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Mon Jun 10 00:40:50 2019 From: ci at centos.org (ci at centos.org) Date: Mon, 10 Jun 2019 00:40:50 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #194 In-Reply-To: <1606114781.350.1560040861089.JavaMail.jenkins@jenkins.ci.centos.org> References: <1606114781.350.1560040861089.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <55654803.392.1560127250727.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 287.47 KB...] 
TASK [container-engine/docker : check number of search domains] **************** Monday 10 June 2019 01:40:07 +0100 (0:00:00.347) 0:02:57.957 *********** TASK [container-engine/docker : check length of search domains] **************** Monday 10 June 2019 01:40:07 +0100 (0:00:00.335) 0:02:58.293 *********** TASK [container-engine/docker : check for minimum kernel version] ************** Monday 10 June 2019 01:40:08 +0100 (0:00:00.330) 0:02:58.624 *********** TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Monday 10 June 2019 01:40:08 +0100 (0:00:00.299) 0:02:58.923 *********** TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] *** Monday 10 June 2019 01:40:09 +0100 (0:00:00.596) 0:02:59.520 *********** TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Monday 10 June 2019 01:40:10 +0100 (0:00:01.352) 0:03:00.873 *********** TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Monday 10 June 2019 01:40:10 +0100 (0:00:00.264) 0:03:01.137 *********** TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Monday 10 June 2019 01:40:11 +0100 (0:00:00.255) 0:03:01.393 *********** TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Monday 10 June 2019 01:40:11 +0100 (0:00:00.305) 0:03:01.699 *********** TASK [container-engine/docker : Configure docker repository on Fedora] ********* Monday 10 June 2019 01:40:11 +0100 (0:00:00.311) 0:03:02.010 *********** TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Monday 10 June 2019 01:40:12 +0100 (0:00:00.331) 0:03:02.342 *********** TASK [container-engine/docker : Copy yum.conf for editing] ********************* Monday 10 June 2019 01:40:12 +0100 (0:00:00.399) 0:03:02.742 *********** TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Monday 10 June 2019 
01:40:12 +0100 (0:00:00.313) 0:03:03.055 *********** TASK [container-engine/docker : ensure docker packages are installed] ********** Monday 10 June 2019 01:40:12 +0100 (0:00:00.272) 0:03:03.328 *********** TASK [container-engine/docker : Ensure docker packages are installed] ********** Monday 10 June 2019 01:40:13 +0100 (0:00:00.453) 0:03:03.781 *********** TASK [container-engine/docker : get available packages on Ubuntu] ************** Monday 10 June 2019 01:40:13 +0100 (0:00:00.392) 0:03:04.174 *********** TASK [container-engine/docker : show available packages on ubuntu] ************* Monday 10 June 2019 01:40:14 +0100 (0:00:00.307) 0:03:04.482 *********** TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Monday 10 June 2019 01:40:14 +0100 (0:00:00.282) 0:03:04.764 *********** TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Monday 10 June 2019 01:40:14 +0100 (0:00:00.292) 0:03:05.057 *********** ok: [kube1] ok: [kube2] ok: [kube3] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Monday 10 June 2019 01:40:16 +0100 (0:00:02.144) 0:03:07.201 *********** ok: [kube1] ok: [kube3] ok: [kube2] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Monday 10 June 2019 01:40:18 +0100 (0:00:01.211) 0:03:08.413 *********** TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Monday 10 June 2019 01:40:18 +0100 (0:00:00.284) 0:03:08.698 *********** changed: [kube1] changed: [kube3] changed: [kube2] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Monday 10 June 2019 01:40:19 +0100 (0:00:00.933) 0:03:09.632 *********** TASK [container-engine/docker : get systemd version] *************************** Monday 10 June 2019 01:40:19 +0100 (0:00:00.320) 0:03:09.953 *********** TASK [container-engine/docker : Write docker.service systemd file] ************* Monday 10 June 2019 01:40:19 +0100 (0:00:00.298) 0:03:10.252 *********** TASK [container-engine/docker : Write docker options systemd drop-in] ********** Monday 10 June 2019 01:40:20 +0100 (0:00:00.355) 0:03:10.607 *********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Monday 10 June 2019 01:40:22 +0100 (0:00:02.122) 0:03:12.729 *********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Monday 10 June 2019 01:40:24 +0100 (0:00:02.156) 0:03:14.886 *********** TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Monday 10 June 2019 01:40:24 +0100 (0:00:00.321) 0:03:15.207 *********** RUNNING HANDLER [container-engine/docker : restart docker] ********************* Monday 10 June 2019 01:40:25 +0100 (0:00:00.230) 0:03:15.437 *********** changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Monday 10 June 2019 01:40:26 +0100 (0:00:00.996) 0:03:16.434 *********** changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER 
[container-engine/docker : Docker | reload docker.socket] ****** Monday 10 June 2019 01:40:27 +0100 (0:00:01.207) 0:03:17.642 *********** RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Monday 10 June 2019 01:40:27 +0100 (0:00:00.361) 0:03:18.003 *********** changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Monday 10 June 2019 01:40:31 +0100 (0:00:04.191) 0:03:22.194 *********** Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube1] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Monday 10 June 2019 01:40:42 +0100 (0:00:10.191) 0:03:32.386 *********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : ensure docker service is started and enabled] *** Monday 10 June 2019 01:40:43 +0100 (0:00:01.237) 0:03:33.624 *********** ok: [kube1] => (item=docker) ok: [kube2] => (item=docker) ok: [kube3] => (item=docker) TASK [download : include_tasks] ************************************************ Monday 10 June 2019 01:40:44 +0100 (0:00:01.342) 0:03:34.967 *********** included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Monday 10 June 2019 01:40:45 +0100 (0:00:00.510) 0:03:35.478 *********** ok: [kube3] ok: [kube1] ok: [kube2] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Monday 10 June 2019 01:40:46 +0100 (0:00:01.190) 0:03:36.668 *********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [download : container_download | create local directory for saved/loaded container images] *** Monday 10 June 2019 01:40:47 +0100 (0:00:01.095) 0:03:37.763 *********** TASK [download : 
Download items] *********************************************** Monday 10 June 2019 01:40:47 +0100 (0:00:00.137) 0:03:37.901 ***********
fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
fatal: [kube2]: FAILED! => (same 'delegate_to' error)
fatal: [kube3]: FAILED! => (same 'delegate_to' error)
[the identical failure then repeats for kube1, kube2 and kube3 on each remaining download task (failed=10 per host in the recap), interleaved with three "included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3" lines; verbatim duplicates elided]
PLAY RECAP *********************************************************************
kube1 : ok=109 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0
kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0
kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0
Monday 10 June 2019 01:40:50 +0100 (0:00:02.763) 0:03:40.665 ***********
===============================================================================
Install packages ------------------------------------------------------- 33.14s
Wait for host to be available ------------------------------------------ 21.69s
gather facts from all instances ---------------------------------------- 17.03s
container-engine/docker : Docker | pause while Docker restarts --------- 10.19s
Persist loaded modules -------------------------------------------------- 6.32s
container-engine/docker : Docker | reload docker ------------------------ 4.19s
kubernetes/preinstall : Create kubernetes directories ------------------- 3.93s
download : Download items ----------------------------------------------- 2.76s
bootstrap-os : Gather nodes
hostnames ----------------------------------- 2.72s bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.67s Load required kernel modules -------------------------------------------- 2.63s kubernetes/preinstall : Create cni directories -------------------------- 2.51s Extend root VG ---------------------------------------------------------- 2.39s kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.31s Gathering Facts --------------------------------------------------------- 2.19s container-engine/docker : Write docker dns systemd drop-in -------------- 2.16s container-engine/docker : ensure service is started if docker packages are already present --- 2.14s kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.13s container-engine/docker : Write docker options systemd drop-in ---------- 2.12s Extend the root LV and FS to occupy remaining space --------------------- 1.98s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
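Every download-role failure above is the same Ansible error: `delegate_to` attached directly to an `include_tasks` statement, which newer Ansible releases reject as an invalid TaskInclude attribute. A quick way to spot such includes, sketched against an inline sample rather than the real kubespray tree (the included filename below is a placeholder; in a real checkout, point grep at roles/download/tasks/):

```shell
# Build a small sample mirroring the failing pattern: delegate_to applied
# to include_tasks itself. "download_decision.yml" is a placeholder name.
sample=$(mktemp)
cat > "$sample" <<'EOF'
---
- name: container_download | Make download decision if pull is required by tag or sha256
  include_tasks: download_decision.yml
  delegate_to: "{{ download_delegate }}"
EOF
# Flag includes that carry delegate_to; these need the keyword moved onto
# the tasks inside the included file, or the include made static.
grep -n -A2 'include_tasks' "$sample"
```

The usual remedies are moving `delegate_to` onto the individual tasks inside the included file, switching to `import_tasks`, or pinning kubespray to a version compatible with the Ansible release installed on the builder.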
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Mon Jun 10 01:23:08 2019 From: ci at centos.org (ci at centos.org) Date: Mon, 10 Jun 2019 01:23:08 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #219 In-Reply-To: <1687307676.358.1560042756020.JavaMail.jenkins@jenkins.ci.centos.org> References: <1687307676.358.1560042756020.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <570845441.395.1560129788426.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 56.41 KB...] changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. 
Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... [701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], 
u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── create └── prepare --> Scenario: 'default' --> Action: 'create' [DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg. PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] 
******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', 
u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
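Both failed gluster_ansible-infra runs above stop at the same ansible-lint findings: [701] (galaxy_info lacks platforms) and [703] (author, description, company, and license still carry the galaxy-init defaults). A sketch of a meta/main.yml that would satisfy those rules; every concrete value below is a placeholder, not taken from the gluster-ansible-infra repository:

```yaml
# roles/firewall_config/meta/main.yml -- illustrative sketch only.
galaxy_info:
  author: placeholder-maintainer           # [703]: replace default author
  description: placeholder description     # [703]: replace default description
  company: placeholder-company             # [703]: replace default company
  license: GPLv3                           # [703]: pick a real license
  min_ansible_version: 2.5
  platforms:                               # [701]: platforms must be listed
    - name: EL
      versions:
        - 7
  galaxy_tags: []
dependencies: []
```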
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Tue Jun 11 00:16:07 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 11 Jun 2019 00:16:07 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #391 In-Reply-To: <494784789.391.1560125767413.JavaMail.jenkins@jenkins.ci.centos.org> References: <494784789.391.1560125767413.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <2086369512.454.1560212167983.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 38.57 KB...] Transaction test succeeded Running transaction Installing : python36-libs-3.6.8-1.el7.x86_64 1/52 Installing : python36-3.6.8-1.el7.x86_64 2/52 Installing : apr-1.4.8-3.el7_4.1.x86_64 3/52 Installing : mpfr-3.1.1-4.el7.x86_64 4/52 Installing : libmpc-1.0.1-3.el7.x86_64 5/52 Installing : apr-util-1.5.2-6.el7.x86_64 6/52 Installing : python36-six-1.11.0-3.el7.noarch 7/52 Installing : cpp-4.8.5-36.el7_6.2.x86_64 8/52 Installing : python36-idna-2.7-2.el7.noarch 9/52 Installing : python36-pysocks-1.6.8-6.el7.noarch 10/52 Installing : python36-urllib3-1.19.1-5.el7.noarch 11/52 Installing : python36-pyroute2-0.4.13-2.el7.noarch 12/52 Installing : python36-setuptools-39.2.0-3.el7.noarch 13/52 Installing : python36-chardet-2.3.0-6.el7.noarch 14/52 Installing : python36-requests-2.12.5-3.el7.noarch 15/52 Installing : python36-distro-1.2.0-3.el7.noarch 16/52 Installing : python36-markupsafe-0.23-3.el7.x86_64 17/52 Installing : python36-jinja2-2.8.1-2.el7.noarch 18/52 Installing : python36-rpm-4.11.3-4.el7.x86_64 19/52 Installing : elfutils-0.172-2.el7.x86_64 20/52 Installing : unzip-6.0-19.el7.x86_64 21/52 Installing : dwz-0.11-3.el7.x86_64 
22/52 Installing : bzip2-1.0.6-13.el7.x86_64 23/52 Installing : usermode-1.111-5.el7.x86_64 24/52 Installing : pakchois-0.4-10.el7.x86_64 25/52 Installing : distribution-gpg-keys-1.31-1.el7.noarch 26/52 Installing : mock-core-configs-30.3-1.el7.noarch 27/52 Installing : patch-2.7.1-10.el7_5.x86_64 28/52 Installing : kernel-headers-3.10.0-957.21.2.el7.x86_64 29/52 Installing : glibc-headers-2.17-260.el7_6.5.x86_64 30/52 Installing : glibc-devel-2.17-260.el7_6.5.x86_64 31/52 Installing : gcc-4.8.5-36.el7_6.2.x86_64 32/52 Installing : libmodman-2.0.1-8.el7.x86_64 33/52 Installing : libproxy-0.4.11-11.el7.x86_64 34/52 Installing : gdb-7.6.1-114.el7.x86_64 35/52 Installing : perl-Thread-Queue-3.02-2.el7.noarch 36/52 Installing : perl-srpm-macros-1-8.el7.noarch 37/52 Installing : pigz-2.3.4-1.el7.x86_64 38/52 Installing : golang-src-1.11.5-1.el7.noarch 39/52 Installing : nettle-2.7.1-8.el7.x86_64 40/52 Installing : zip-3.0-11.el7.x86_64 41/52 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 42/52 Installing : mercurial-2.6.2-8.el7_4.x86_64 43/52 Installing : trousers-0.3.14-2.el7.x86_64 44/52 Installing : gnutls-3.3.29-9.el7_6.x86_64 45/52 Installing : neon-0.30.0-3.el7.x86_64 46/52 Installing : subversion-libs-1.7.14-14.el7.x86_64 47/52 Installing : subversion-1.7.14-14.el7.x86_64 48/52 Installing : golang-1.11.5-1.el7.x86_64 49/52 Installing : golang-bin-1.11.5-1.el7.x86_64 50/52 Installing : rpm-build-4.11.3-35.el7.x86_64 51/52 Installing : mock-1.4.16-1.el7.noarch 52/52 Verifying : trousers-0.3.14-2.el7.x86_64 1/52 Verifying : python36-idna-2.7-2.el7.noarch 2/52 Verifying : rpm-build-4.11.3-35.el7.x86_64 3/52 Verifying : python36-pysocks-1.6.8-6.el7.noarch 4/52 Verifying : mercurial-2.6.2-8.el7_4.x86_64 5/52 Verifying : zip-3.0-11.el7.x86_64 6/52 Verifying : python36-3.6.8-1.el7.x86_64 7/52 Verifying : subversion-libs-1.7.14-14.el7.x86_64 8/52 Verifying : python36-urllib3-1.19.1-5.el7.noarch 9/52 Verifying : nettle-2.7.1-8.el7.x86_64 10/52 Verifying : 
gcc-4.8.5-36.el7_6.2.x86_64 11/52 Verifying : golang-src-1.11.5-1.el7.noarch 12/52 Verifying : python36-pyroute2-0.4.13-2.el7.noarch 13/52 Verifying : pigz-2.3.4-1.el7.x86_64 14/52 Verifying : perl-srpm-macros-1-8.el7.noarch 15/52 Verifying : golang-1.11.5-1.el7.x86_64 16/52 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 17/52 Verifying : glibc-devel-2.17-260.el7_6.5.x86_64 18/52 Verifying : golang-bin-1.11.5-1.el7.x86_64 19/52 Verifying : gdb-7.6.1-114.el7.x86_64 20/52 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/52 Verifying : gnutls-3.3.29-9.el7_6.x86_64 22/52 Verifying : mock-1.4.16-1.el7.noarch 23/52 Verifying : libmodman-2.0.1-8.el7.x86_64 24/52 Verifying : python36-setuptools-39.2.0-3.el7.noarch 25/52 Verifying : mpfr-3.1.1-4.el7.x86_64 26/52 Verifying : python36-six-1.11.0-3.el7.noarch 27/52 Verifying : apr-util-1.5.2-6.el7.x86_64 28/52 Verifying : python36-chardet-2.3.0-6.el7.noarch 29/52 Verifying : kernel-headers-3.10.0-957.21.2.el7.x86_64 30/52 Verifying : patch-2.7.1-10.el7_5.x86_64 31/52 Verifying : distribution-gpg-keys-1.31-1.el7.noarch 32/52 Verifying : pakchois-0.4-10.el7.x86_64 33/52 Verifying : mock-core-configs-30.3-1.el7.noarch 34/52 Verifying : usermode-1.111-5.el7.x86_64 35/52 Verifying : apr-1.4.8-3.el7_4.1.x86_64 36/52 Verifying : libproxy-0.4.11-11.el7.x86_64 37/52 Verifying : neon-0.30.0-3.el7.x86_64 38/52 Verifying : bzip2-1.0.6-13.el7.x86_64 39/52 Verifying : subversion-1.7.14-14.el7.x86_64 40/52 Verifying : python36-distro-1.2.0-3.el7.noarch 41/52 Verifying : glibc-headers-2.17-260.el7_6.5.x86_64 42/52 Verifying : dwz-0.11-3.el7.x86_64 43/52 Verifying : unzip-6.0-19.el7.x86_64 44/52 Verifying : python36-markupsafe-0.23-3.el7.x86_64 45/52 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 46/52 Verifying : python36-requests-2.12.5-3.el7.noarch 47/52 Verifying : python36-jinja2-2.8.1-2.el7.noarch 48/52 Verifying : python36-libs-3.6.8-1.el7.x86_64 49/52 Verifying : elfutils-0.172-2.el7.x86_64 50/52 Verifying : 
python36-rpm-4.11.3-4.el7.x86_64 51/52 Verifying : libmpc-1.0.1-3.el7.x86_64 52/52 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.16-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.31-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5 glibc-headers.x86_64 0:2.17-260.el7_6.5 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.21.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.3-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python36.x86_64 0:3.6.8-1.el7 python36-chardet.noarch 0:2.3.0-6.el7 python36-distro.noarch 0:1.2.0-3.el7 python36-idna.noarch 0:2.7-2.el7 python36-jinja2.noarch 0:2.8.1-2.el7 python36-libs.x86_64 0:3.6.8-1.el7 python36-markupsafe.x86_64 0:0.23-3.el7 python36-pyroute2.noarch 0:0.4.13-2.el7 python36-pysocks.noarch 0:1.6.8-6.el7 python36-requests.noarch 0:2.12.5-3.el7 python36-rpm.x86_64 0:4.11.3-4.el7 python36-setuptools.noarch 0:39.2.0-3.el7 python36-six.noarch 0:1.11.0-3.el7 python36-urllib3.noarch 0:1.19.1-5.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1980 0 --:--:-- --:--:-- --:--:-- 1996 100 8513k 100 8513k 0 0 14.5M 0 --:--:-- --:--:-- --:--:-- 14.5M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2026 0 --:--:-- --:--:-- --:--:-- 2022 87 38.3M 87 33.4M 0 0 27.1M 0 0:00:01 0:00:01 --:--:-- 27.1M100 38.3M 100 38.3M 0 0 28.6M 0 0:00:01 0:00:01 --:--:-- 45.3M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 560 0 --:--:-- --:--:-- --:--:-- 562 0 0 0 620 0 0 1746 0 --:--:-- --:--:-- --:--:-- 1746 0 10.7M 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 10.7M 100 10.7M 0 0 16.7M 0 --:--:-- --:--:-- --:--:-- 74.5M ~/nightlyrpmbMQvZq/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmbMQvZq/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmbMQvZq/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmbMQvZq ~ INFO: mock.py version 1.4.16 starting (python version = 3.6.8)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmbMQvZq/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.16 INFO: Mock Version: 1.4.16 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmbMQvZq/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 25 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M fa74ca0f4f634ba984769af3b1aa8f93 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.3kmhr8vh:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
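The mock failure above reports `Config(epel-7-x86_64)` and the SRPM it was rebuilding. A sketch of reproducing that step locally; the chroot name and SRPM path are copied from the INFO lines, but the invocation itself is standard mock usage inferred from them, not a command shown in this log, and it is echoed rather than executed:

```shell
#!/bin/sh
# Chroot config and SRPM path taken from the Config(...) and Start(...)
# lines in the log above; the mock invocation itself is an assumption.
CHROOT="epel-7-x86_64"
SRPM="/root/nightlyrpmbMQvZq/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm"

# Compose and print the command instead of running it, so the sketch is
# safe on machines where mock is not installed.
CMD="mock -r $CHROOT --rebuild $SRPM"
echo "$CMD"
```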
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins1460269318469681302.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done cbb5ff72 +---------+----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 262 | n7.gusty | 172.19.2.135 | gusty | 3664 | Deployed | cbb5ff72 | None | None | 7 | x86_64 | 1 | 2060 | None | +---------+----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Tue Jun 11 00:38:49 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 11 Jun 2019 00:38:49 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #195 In-Reply-To: <55654803.392.1560127250727.JavaMail.jenkins@jenkins.ci.centos.org> References: <55654803.392.1560127250727.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <941762576.456.1560213529226.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 287.40 KB...] 
TASK [container-engine/docker : check number of search domains] **************** Tuesday 11 June 2019 01:38:23 +0100 (0:00:00.132) 0:01:58.471 ********** TASK [container-engine/docker : check length of search domains] **************** Tuesday 11 June 2019 01:38:23 +0100 (0:00:00.130) 0:01:58.601 ********** TASK [container-engine/docker : check for minimum kernel version] ************** Tuesday 11 June 2019 01:38:23 +0100 (0:00:00.128) 0:01:58.730 ********** TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Tuesday 11 June 2019 01:38:23 +0100 (0:00:00.123) 0:01:58.854 ********** TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] *** Tuesday 11 June 2019 01:38:23 +0100 (0:00:00.246) 0:01:59.101 ********** TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Tuesday 11 June 2019 01:38:24 +0100 (0:00:00.622) 0:01:59.723 ********** TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Tuesday 11 June 2019 01:38:24 +0100 (0:00:00.113) 0:01:59.837 ********** TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Tuesday 11 June 2019 01:38:24 +0100 (0:00:00.113) 0:01:59.951 ********** TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Tuesday 11 June 2019 01:38:24 +0100 (0:00:00.138) 0:02:00.089 ********** TASK [container-engine/docker : Configure docker repository on Fedora] ********* Tuesday 11 June 2019 01:38:24 +0100 (0:00:00.142) 0:02:00.231 ********** TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Tuesday 11 June 2019 01:38:25 +0100 (0:00:00.123) 0:02:00.354 ********** TASK [container-engine/docker : Copy yum.conf for editing] ********************* Tuesday 11 June 2019 01:38:25 +0100 (0:00:00.125) 0:02:00.480 ********** TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Tuesday 11 June 2019 
01:38:25 +0100 (0:00:00.123) 0:02:00.604 ********** TASK [container-engine/docker : ensure docker packages are installed] ********** Tuesday 11 June 2019 01:38:25 +0100 (0:00:00.126) 0:02:00.730 ********** TASK [container-engine/docker : Ensure docker packages are installed] ********** Tuesday 11 June 2019 01:38:25 +0100 (0:00:00.164) 0:02:00.895 ********** TASK [container-engine/docker : get available packages on Ubuntu] ************** Tuesday 11 June 2019 01:38:25 +0100 (0:00:00.154) 0:02:01.049 ********** TASK [container-engine/docker : show available packages on ubuntu] ************* Tuesday 11 June 2019 01:38:25 +0100 (0:00:00.130) 0:02:01.180 ********** TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Tuesday 11 June 2019 01:38:26 +0100 (0:00:00.129) 0:02:01.309 ********** TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Tuesday 11 June 2019 01:38:26 +0100 (0:00:00.122) 0:02:01.432 ********** ok: [kube3] ok: [kube2] ok: [kube1] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Tuesday 11 June 2019 01:38:27 +0100 (0:00:00.889) 0:02:02.321 ********** ok: [kube1] ok: [kube2] ok: [kube3] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Tuesday 11 June 2019 01:38:27 +0100 (0:00:00.553) 0:02:02.875 ********** TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Tuesday 11 June 2019 01:38:27 +0100 (0:00:00.128) 0:02:03.004 ********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Tuesday 11 June 2019 01:38:28 +0100 (0:00:00.459) 0:02:03.464 ********** TASK [container-engine/docker : get systemd version] *************************** Tuesday 11 June 2019 01:38:28 +0100 (0:00:00.134) 0:02:03.598 ********** TASK [container-engine/docker : Write docker.service systemd file] ************* Tuesday 11 June 2019 01:38:28 +0100 (0:00:00.145) 0:02:03.743 ********** TASK [container-engine/docker : Write docker options systemd drop-in] ********** Tuesday 11 June 2019 01:38:28 +0100 (0:00:00.138) 0:02:03.882 ********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Tuesday 11 June 2019 01:38:29 +0100 (0:00:00.904) 0:02:04.786 ********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Tuesday 11 June 2019 01:38:30 +0100 (0:00:00.929) 0:02:05.716 ********** TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Tuesday 11 June 2019 01:38:30 +0100 (0:00:00.144) 0:02:05.860 ********** RUNNING HANDLER [container-engine/docker : restart docker] ********************* Tuesday 11 June 2019 01:38:30 +0100 (0:00:00.110) 0:02:05.970 ********** changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Tuesday 11 June 2019 01:38:31 +0100 (0:00:00.420) 0:02:06.391 ********** changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER 
[container-engine/docker : Docker | reload docker.socket] ****** Tuesday 11 June 2019 01:38:31 +0100 (0:00:00.630) 0:02:07.022 ********** RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Tuesday 11 June 2019 01:38:31 +0100 (0:00:00.128) 0:02:07.150 ********** changed: [kube2] changed: [kube1] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Tuesday 11 June 2019 01:38:34 +0100 (0:00:03.074) 0:02:10.224 ********** Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube1] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Tuesday 11 June 2019 01:38:45 +0100 (0:00:10.094) 0:02:20.319 ********** changed: [kube2] changed: [kube1] changed: [kube3] TASK [container-engine/docker : ensure docker service is started and enabled] *** Tuesday 11 June 2019 01:38:45 +0100 (0:00:00.560) 0:02:20.879 ********** ok: [kube3] => (item=docker) ok: [kube1] => (item=docker) ok: [kube2] => (item=docker) TASK [download : include_tasks] ************************************************ Tuesday 11 June 2019 01:38:46 +0100 (0:00:00.671) 0:02:21.550 ********** included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Tuesday 11 June 2019 01:38:46 +0100 (0:00:00.212) 0:02:21.763 ********** ok: [kube1] ok: [kube2] ok: [kube3] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Tuesday 11 June 2019 01:38:47 +0100 (0:00:00.578) 0:02:22.342 ********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [download : container_download | create local directory for saved/loaded container images] *** Tuesday 11 June 2019 01:38:47 +0100 (0:00:00.468) 0:02:22.810 ********** TASK [download : 
Download items] *********************************************** Tuesday 11 June 2019 01:38:47 +0100 (0:00:00.061) 0:02:22.872 ********** fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
[...identical 'delegate_to' TaskInclude failures for kube1, kube2 and kube3 repeated for the remaining download tasks; included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 (three times)...]
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} PLAY RECAP ********************************************************************* kube1 : ok=109 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0 kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0 kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0 Tuesday 11 June 2019 01:38:48 +0100 (0:00:01.370) 0:02:24.242 ********** =============================================================================== Install packages ------------------------------------------------------- 25.64s Wait for host to be available ------------------------------------------ 16.16s Extend root VG --------------------------------------------------------- 13.03s gather facts from all instances ---------------------------------------- 10.75s container-engine/docker : Docker | pause while Docker restarts --------- 10.09s Persist loaded modules -------------------------------------------------- 3.44s container-engine/docker : Docker | reload docker ------------------------ 3.07s kubernetes/preinstall : Create kubernetes directories ------------------- 1.86s bootstrap-os : Gather nodes 
hostnames ----------------------------------- 1.76s Load required kernel modules -------------------------------------------- 1.64s kubernetes/preinstall : Enable ip forwarding ---------------------------- 1.64s bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 1.46s Extend the root LV and FS to occupy remaining space --------------------- 1.46s download : Download items ----------------------------------------------- 1.37s Gathering Facts --------------------------------------------------------- 1.26s bootstrap-os : Disable fastestmirror plugin ----------------------------- 1.17s download : Download items ----------------------------------------------- 1.15s kubernetes/preinstall : Create cni directories -------------------------- 1.14s bootstrap-os : Create remote_tmp for it is used by another module ------- 1.12s kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 1.04s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
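The build above dies repeatedly on the same Ansible error: `delegate_to` is not accepted as an attribute on a dynamic task include (a TaskInclude). A minimal sketch of the failing pattern and one possible workaround (task names, image, and delegation target are illustrative, not the actual kubespray source):

```yaml
# Rejected: newer Ansible releases refuse task keywords such as delegate_to
# placed directly on include_tasks, which is what produces the fatal errors above.
- name: container_download | Download containers
  include_tasks: download_container.yml
  delegate_to: localhost   # <- "'delegate_to' is not a valid attribute for a TaskInclude"

# One possible workaround (an assumption, not the project's actual fix):
# delegate the tasks inside the included file instead.
# In download_container.yml:
- name: container_download | Pull image
  command: /usr/bin/docker pull nginx   # hypothetical task body
  delegate_to: localhost
```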
Could not match :Build started : False Logical operation result is FALSE Skipping script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0

From sankarshan.mukhopadhyay at gmail.com Tue Jun 11 01:00:56 2019 From: sankarshan.mukhopadhyay at gmail.com (Sankarshan Mukhopadhyay) Date: Tue, 11 Jun 2019 06:30:56 +0530 Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #391 In-Reply-To: <2086369512.454.1560212167983.JavaMail.jenkins@jenkins.ci.centos.org> References: <494784789.391.1560125767413.JavaMail.jenkins@jenkins.ci.centos.org> <2086369512.454.1560212167983.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: On Tue, Jun 11, 2019 at 5:46 AM wrote: > > See > Do we need to continue with this job?

From ci at centos.org Tue Jun 11 01:17:59 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 11 Jun 2019 01:17:59 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #220 In-Reply-To: <570845441.395.1560129788426.JavaMail.jenkins@jenkins.ci.centos.org> References: <570845441.395.1560129788426.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1101276905.459.1560215879146.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 56.40 KB...] changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', 
u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── create └── 
prepare --> Scenario: 'default' --> Action: 'create' [DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg. PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', 
u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
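The ansible-lint [701] and [703] hits above come from the role's meta/main.yml still carrying the Galaxy template defaults ("your name", "your description", and no platforms list). A sketch of metadata that would satisfy those rules (all values are placeholders, not the project's real metadata):

```yaml
# roles/firewall_config/meta/main.yml -- illustrative values only
galaxy_info:
  author: Gluster maintainers            # [703] replace default "your name"
  description: Configure firewalld ports for Gluster   # [703] replace default description
  company: example                       # [703] replace the default, or drop the key
  license: GPLv2                         # [703] pick one concrete license
  min_ansible_version: 2.5
  platforms:                             # [701] role info should contain platforms
    - name: EL
      versions:
        - 7
  galaxy_tags: []
dependencies: []
```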
Could not match :Build started : False Logical operation result is FALSE Skipping script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0

From ci at centos.org Wed Jun 12 00:16:08 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 12 Jun 2019 00:16:08 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #392 In-Reply-To: <2086369512.454.1560212167983.JavaMail.jenkins@jenkins.ci.centos.org> References: <2086369512.454.1560212167983.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <2082072302.506.1560298568126.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 38.58 KB...] Transaction test succeeded Running transaction Installing : python36-libs-3.6.8-1.el7.x86_64 1/52 Installing : python36-3.6.8-1.el7.x86_64 2/52 Installing : apr-1.4.8-3.el7_4.1.x86_64 3/52 Installing : mpfr-3.1.1-4.el7.x86_64 4/52 Installing : libmpc-1.0.1-3.el7.x86_64 5/52 Installing : apr-util-1.5.2-6.el7.x86_64 6/52 Installing : python36-six-1.11.0-3.el7.noarch 7/52 Installing : cpp-4.8.5-36.el7_6.2.x86_64 8/52 Installing : python36-idna-2.7-2.el7.noarch 9/52 Installing : python36-pysocks-1.6.8-6.el7.noarch 10/52 Installing : python36-urllib3-1.19.1-5.el7.noarch 11/52 Installing : python36-pyroute2-0.4.13-2.el7.noarch 12/52 Installing : python36-setuptools-39.2.0-3.el7.noarch 13/52 Installing : python36-chardet-2.3.0-6.el7.noarch 14/52 Installing : python36-requests-2.12.5-3.el7.noarch 15/52 Installing : python36-distro-1.2.0-3.el7.noarch 16/52 Installing : python36-markupsafe-0.23-3.el7.x86_64 17/52 Installing : python36-jinja2-2.8.1-2.el7.noarch 18/52 Installing : python36-rpm-4.11.3-4.el7.x86_64 19/52 Installing : elfutils-0.172-2.el7.x86_64 20/52 Installing : unzip-6.0-19.el7.x86_64 21/52 Installing : dwz-0.11-3.el7.x86_64 
22/52 Installing : bzip2-1.0.6-13.el7.x86_64 23/52 Installing : usermode-1.111-5.el7.x86_64 24/52 Installing : pakchois-0.4-10.el7.x86_64 25/52 Installing : distribution-gpg-keys-1.31-1.el7.noarch 26/52 Installing : mock-core-configs-30.3-1.el7.noarch 27/52 Installing : patch-2.7.1-10.el7_5.x86_64 28/52 Installing : kernel-headers-3.10.0-957.21.2.el7.x86_64 29/52 Installing : glibc-headers-2.17-260.el7_6.5.x86_64 30/52 Installing : glibc-devel-2.17-260.el7_6.5.x86_64 31/52 Installing : gcc-4.8.5-36.el7_6.2.x86_64 32/52 Installing : libmodman-2.0.1-8.el7.x86_64 33/52 Installing : libproxy-0.4.11-11.el7.x86_64 34/52 Installing : gdb-7.6.1-114.el7.x86_64 35/52 Installing : perl-Thread-Queue-3.02-2.el7.noarch 36/52 Installing : perl-srpm-macros-1-8.el7.noarch 37/52 Installing : pigz-2.3.4-1.el7.x86_64 38/52 Installing : golang-src-1.11.5-1.el7.noarch 39/52 Installing : nettle-2.7.1-8.el7.x86_64 40/52 Installing : zip-3.0-11.el7.x86_64 41/52 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 42/52 Installing : mercurial-2.6.2-8.el7_4.x86_64 43/52 Installing : trousers-0.3.14-2.el7.x86_64 44/52 Installing : gnutls-3.3.29-9.el7_6.x86_64 45/52 Installing : neon-0.30.0-3.el7.x86_64 46/52 Installing : subversion-libs-1.7.14-14.el7.x86_64 47/52 Installing : subversion-1.7.14-14.el7.x86_64 48/52 Installing : golang-1.11.5-1.el7.x86_64 49/52 Installing : golang-bin-1.11.5-1.el7.x86_64 50/52 Installing : rpm-build-4.11.3-35.el7.x86_64 51/52 Installing : mock-1.4.16-1.el7.noarch 52/52 Verifying : trousers-0.3.14-2.el7.x86_64 1/52 Verifying : python36-idna-2.7-2.el7.noarch 2/52 Verifying : rpm-build-4.11.3-35.el7.x86_64 3/52 Verifying : python36-pysocks-1.6.8-6.el7.noarch 4/52 Verifying : mercurial-2.6.2-8.el7_4.x86_64 5/52 Verifying : zip-3.0-11.el7.x86_64 6/52 Verifying : python36-3.6.8-1.el7.x86_64 7/52 Verifying : subversion-libs-1.7.14-14.el7.x86_64 8/52 Verifying : python36-urllib3-1.19.1-5.el7.noarch 9/52 Verifying : nettle-2.7.1-8.el7.x86_64 10/52 Verifying : 
gcc-4.8.5-36.el7_6.2.x86_64 11/52 Verifying : golang-src-1.11.5-1.el7.noarch 12/52 Verifying : python36-pyroute2-0.4.13-2.el7.noarch 13/52 Verifying : pigz-2.3.4-1.el7.x86_64 14/52 Verifying : perl-srpm-macros-1-8.el7.noarch 15/52 Verifying : golang-1.11.5-1.el7.x86_64 16/52 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 17/52 Verifying : glibc-devel-2.17-260.el7_6.5.x86_64 18/52 Verifying : golang-bin-1.11.5-1.el7.x86_64 19/52 Verifying : gdb-7.6.1-114.el7.x86_64 20/52 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/52 Verifying : gnutls-3.3.29-9.el7_6.x86_64 22/52 Verifying : mock-1.4.16-1.el7.noarch 23/52 Verifying : libmodman-2.0.1-8.el7.x86_64 24/52 Verifying : python36-setuptools-39.2.0-3.el7.noarch 25/52 Verifying : mpfr-3.1.1-4.el7.x86_64 26/52 Verifying : python36-six-1.11.0-3.el7.noarch 27/52 Verifying : apr-util-1.5.2-6.el7.x86_64 28/52 Verifying : python36-chardet-2.3.0-6.el7.noarch 29/52 Verifying : kernel-headers-3.10.0-957.21.2.el7.x86_64 30/52 Verifying : patch-2.7.1-10.el7_5.x86_64 31/52 Verifying : distribution-gpg-keys-1.31-1.el7.noarch 32/52 Verifying : pakchois-0.4-10.el7.x86_64 33/52 Verifying : mock-core-configs-30.3-1.el7.noarch 34/52 Verifying : usermode-1.111-5.el7.x86_64 35/52 Verifying : apr-1.4.8-3.el7_4.1.x86_64 36/52 Verifying : libproxy-0.4.11-11.el7.x86_64 37/52 Verifying : neon-0.30.0-3.el7.x86_64 38/52 Verifying : bzip2-1.0.6-13.el7.x86_64 39/52 Verifying : subversion-1.7.14-14.el7.x86_64 40/52 Verifying : python36-distro-1.2.0-3.el7.noarch 41/52 Verifying : glibc-headers-2.17-260.el7_6.5.x86_64 42/52 Verifying : dwz-0.11-3.el7.x86_64 43/52 Verifying : unzip-6.0-19.el7.x86_64 44/52 Verifying : python36-markupsafe-0.23-3.el7.x86_64 45/52 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 46/52 Verifying : python36-requests-2.12.5-3.el7.noarch 47/52 Verifying : python36-jinja2-2.8.1-2.el7.noarch 48/52 Verifying : python36-libs-3.6.8-1.el7.x86_64 49/52 Verifying : elfutils-0.172-2.el7.x86_64 50/52 Verifying : 
python36-rpm-4.11.3-4.el7.x86_64 51/52 Verifying : libmpc-1.0.1-3.el7.x86_64 52/52 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.16-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.31-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5 glibc-headers.x86_64 0:2.17-260.el7_6.5 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.21.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.3-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python36.x86_64 0:3.6.8-1.el7 python36-chardet.noarch 0:2.3.0-6.el7 python36-distro.noarch 0:1.2.0-3.el7 python36-idna.noarch 0:2.7-2.el7 python36-jinja2.noarch 0:2.8.1-2.el7 python36-libs.x86_64 0:3.6.8-1.el7 python36-markupsafe.x86_64 0:0.23-3.el7 python36-pyroute2.noarch 0:0.4.13-2.el7 python36-pysocks.noarch 0:1.6.8-6.el7 python36-requests.noarch 0:2.12.5-3.el7 python36-rpm.x86_64 0:4.11.3-4.el7 python36-setuptools.noarch 0:39.2.0-3.el7 python36-six.noarch 0:1.11.0-3.el7 python36-urllib3.noarch 0:1.19.1-5.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0
[... curl download progress output ...]
Installing gometalinter.
Version: 2.0.5
[... curl download progress output ...]
Installing etcd.
Version: v3.3.9
[... curl download progress output ...]
~/nightlyrpmiyvfBg/go/src/github.com/gluster/glusterd2 ~
Installing vendored packages
Creating dist archive /root/nightlyrpmiyvfBg/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz
Created dist archive /root/nightlyrpmiyvfBg/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz
~
~/nightlyrpmiyvfBg ~
INFO: mock.py version 1.4.16 starting (python version = 3.6.8)...
Start: init plugins
INFO: selinux disabled
Finish: init plugins
Start: run
INFO: Start(/root/nightlyrpmiyvfBg/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64)
Start: clean chroot
Finish: clean chroot
Start: chroot init
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled yum cache
Start: cleaning yum metadata
Finish: cleaning yum metadata
INFO: enabled HW Info plugin
Mock Version: 1.4.16
INFO: Mock Version: 1.4.16
Start: yum install
Finish: yum install
Start: creating root cache
Finish: creating root cache
Finish: chroot init
INFO: Installed packages:
Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: Outputting list of installed packages
Finish: Outputting list of installed packages
ERROR: Exception(/root/nightlyrpmiyvfBg/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 25 seconds
INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64
INFO: Cleaning up build root ('cleanup_on_failure=True')
Start: clean chroot
Finish: clean chroot
ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M f889f6eda2674da08796f523e9d00e52 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.ragluwwj:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for :Building remotely : True
Logical operation result is TRUE
Running script  : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins5734245329774847399.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done c98f44cf
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor |
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 140 | n13.crusty | 172.19.2.13 | crusty | 3614 | Deployed | c98f44cf | None | None | 7 | x86_64 | 1 | 2120 | None |
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0

From ci at centos.org Wed Jun 12 00:40:56 2019
From: ci at centos.org (ci at centos.org)
Date: Wed, 12 Jun 2019 00:40:56 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #196
In-Reply-To: <941762576.456.1560213529226.JavaMail.jenkins@jenkins.ci.centos.org>
References: <941762576.456.1560213529226.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <580465693.507.1560300056084.JavaMail.jenkins@jenkins.ci.centos.org>

See
------------------------------------------
[...truncated 287.55 KB...]
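The cico-node-done-from-ansible.sh post-build step that runs in these jobs simply iterates over the SSID file and releases each Duffy node. A minimal runnable sketch of that loop, with the `cico` CLI stubbed out since it exists only inside CentOS CI (the SSID value is the one from the log above):

```shell
# Sketch of the post-build node-release loop from cico-node-done-from-ansible.sh.
# The real `cico` CLI is only available on CentOS CI slaves, so it is stubbed
# here; the SSID-file handling and the loop match the script shown in the log.
cico() { echo "released $4"; }   # stub for: cico -q node done <ssid>

WORKSPACE=${WORKSPACE:-.}
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

# The provisioning step writes one session id per line into the SSID file.
printf 'c98f44cf\n' > "$SSID_FILE"

for ssid in $(cat "${SSID_FILE}")
do
    cico -q node done "$ssid"
done
```

When the build never provisioned a node, `SSID_FILE` is empty and the loop body is skipped, which is why the "Could not match :Build started" branch can safely skip the script.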
TASK [container-engine/docker : check number of search domains] **************** Wednesday 12 June 2019 01:40:13 +0100 (0:00:00.308) 0:02:59.521 ******** TASK [container-engine/docker : check length of search domains] **************** Wednesday 12 June 2019 01:40:13 +0100 (0:00:00.297) 0:02:59.819 ******** TASK [container-engine/docker : check for minimum kernel version] ************** Wednesday 12 June 2019 01:40:14 +0100 (0:00:00.294) 0:03:00.114 ******** TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Wednesday 12 June 2019 01:40:14 +0100 (0:00:00.283) 0:03:00.397 ******** TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] *** Wednesday 12 June 2019 01:40:14 +0100 (0:00:00.555) 0:03:00.952 ******** TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Wednesday 12 June 2019 01:40:16 +0100 (0:00:01.356) 0:03:02.309 ******** TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Wednesday 12 June 2019 01:40:16 +0100 (0:00:00.264) 0:03:02.573 ******** TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Wednesday 12 June 2019 01:40:16 +0100 (0:00:00.256) 0:03:02.830 ******** TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Wednesday 12 June 2019 01:40:17 +0100 (0:00:00.311) 0:03:03.141 ******** TASK [container-engine/docker : Configure docker repository on Fedora] ********* Wednesday 12 June 2019 01:40:17 +0100 (0:00:00.319) 0:03:03.461 ******** TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Wednesday 12 June 2019 01:40:17 +0100 (0:00:00.317) 0:03:03.779 ******** TASK [container-engine/docker : Copy yum.conf for editing] ********************* Wednesday 12 June 2019 01:40:17 +0100 (0:00:00.287) 0:03:04.066 ******** TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Wednesday 12 June 
2019 01:40:18 +0100 (0:00:00.285) 0:03:04.352 ******** TASK [container-engine/docker : ensure docker packages are installed] ********** Wednesday 12 June 2019 01:40:18 +0100 (0:00:00.332) 0:03:04.684 ******** TASK [container-engine/docker : Ensure docker packages are installed] ********** Wednesday 12 June 2019 01:40:18 +0100 (0:00:00.396) 0:03:05.080 ******** TASK [container-engine/docker : get available packages on Ubuntu] ************** Wednesday 12 June 2019 01:40:19 +0100 (0:00:00.349) 0:03:05.430 ******** TASK [container-engine/docker : show available packages on ubuntu] ************* Wednesday 12 June 2019 01:40:19 +0100 (0:00:00.282) 0:03:05.713 ******** TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Wednesday 12 June 2019 01:40:19 +0100 (0:00:00.339) 0:03:06.053 ******** TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Wednesday 12 June 2019 01:40:20 +0100 (0:00:00.324) 0:03:06.377 ******** ok: [kube2] ok: [kube3] ok: [kube1] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Wednesday 12 June 2019 01:40:22 +0100 (0:00:02.032) 0:03:08.410 ******** ok: [kube1] ok: [kube3] ok: [kube2] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Wednesday 12 June 2019 01:40:23 +0100 (0:00:01.097) 0:03:09.508 ******** TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Wednesday 12 June 2019 01:40:23 +0100 (0:00:00.292) 0:03:09.800 ******** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Wednesday 12 June 2019 01:40:24 +0100 (0:00:00.966) 0:03:10.767 ******** TASK [container-engine/docker : get systemd version] *************************** Wednesday 12 June 2019 01:40:24 +0100 (0:00:00.332) 0:03:11.099 ******** TASK [container-engine/docker : Write docker.service systemd file] ************* Wednesday 12 June 2019 01:40:25 +0100 (0:00:00.300) 0:03:11.400 ******** TASK [container-engine/docker : Write docker options systemd drop-in] ********** Wednesday 12 June 2019 01:40:25 +0100 (0:00:00.317) 0:03:11.718 ******** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Wednesday 12 June 2019 01:40:27 +0100 (0:00:02.044) 0:03:13.762 ******** changed: [kube1] changed: [kube3] changed: [kube2] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Wednesday 12 June 2019 01:40:29 +0100 (0:00:02.028) 0:03:15.790 ******** TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Wednesday 12 June 2019 01:40:29 +0100 (0:00:00.312) 0:03:16.103 ******** RUNNING HANDLER [container-engine/docker : restart docker] ********************* Wednesday 12 June 2019 01:40:30 +0100 (0:00:00.236) 0:03:16.340 ******** changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Wednesday 12 June 2019 01:40:31 +0100 (0:00:00.881) 0:03:17.222 ******** changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER 
[container-engine/docker : Docker | reload docker.socket] ****** Wednesday 12 June 2019 01:40:32 +0100 (0:00:01.098) 0:03:18.321 ******** RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Wednesday 12 June 2019 01:40:32 +0100 (0:00:00.351) 0:03:18.673 ******** changed: [kube2] changed: [kube1] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Wednesday 12 June 2019 01:40:37 +0100 (0:00:04.478) 0:03:23.151 ******** Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube1] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Wednesday 12 June 2019 01:40:47 +0100 (0:00:10.211) 0:03:33.362 ******** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : ensure docker service is started and enabled] *** Wednesday 12 June 2019 01:40:48 +0100 (0:00:01.229) 0:03:34.592 ******** ok: [kube1] => (item=docker) ok: [kube2] => (item=docker) ok: [kube3] => (item=docker) TASK [download : include_tasks] ************************************************ Wednesday 12 June 2019 01:40:49 +0100 (0:00:01.413) 0:03:36.005 ******** included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Wednesday 12 June 2019 01:40:50 +0100 (0:00:00.505) 0:03:36.511 ******** ok: [kube1] ok: [kube2] ok: [kube3] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Wednesday 12 June 2019 01:40:51 +0100 (0:00:01.268) 0:03:37.780 ******** changed: [kube1] changed: [kube2] changed: [kube3] TASK [download : container_download | create local directory for saved/loaded container images] *** Wednesday 12 June 2019 01:40:52 +0100 (0:00:01.031) 0:03:38.811 ******** TASK [download : 
Download items] *********************************************** Wednesday 12 June 2019 01:40:52 +0100 (0:00:00.135) 0:03:38.947 ******** fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
[... identical 'delegate_to' TaskInclude failures for kube1, kube2 and kube3 repeated for each remaining download task; repeats omitted ...]
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
fatal: [kube2]: FAILED!
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}

PLAY RECAP *********************************************************************
kube1 : ok=109 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0
kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0
kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0

Wednesday 12 June 2019 01:40:55 +0100 (0:00:02.804) 0:03:41.752 ********
===============================================================================
Install packages ------------------------------------------------------- 34.31s
Wait for host to be available ------------------------------------------ 21.76s
gather facts from all instances ---------------------------------------- 17.64s
container-engine/docker : Docker | pause while Docker restarts --------- 10.21s
Persist loaded modules -------------------------------------------------- 5.80s
container-engine/docker : Docker | reload docker ------------------------ 4.48s
kubernetes/preinstall : Create kubernetes directories ------------------- 4.07s
download : Download items ----------------------------------------------- 2.81s
bootstrap-os : Assign
inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.62s Load required kernel modules -------------------------------------------- 2.62s kubernetes/preinstall : Create cni directories -------------------------- 2.59s bootstrap-os : Gather nodes hostnames ----------------------------------- 2.57s Extend root VG ---------------------------------------------------------- 2.52s Gathering Facts --------------------------------------------------------- 2.38s kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.30s kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.13s container-engine/docker : Write docker options systemd drop-in ---------- 2.04s container-engine/docker : ensure service is started if docker packages are already present --- 2.03s container-engine/docker : Write docker dns systemd drop-in -------------- 2.03s download : Sync container ----------------------------------------------- 2.01s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
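The repeated fatal above is Ansible rejecting `delegate_to` placed directly on an `include_tasks` statement (a TaskInclude) in Kubespray's download role; newer Ansible releases validate include keywords strictly. A sketch of the usual repair, where the commented-out form approximates the failing task and the included filename plus the `download_delegate` variable are assumptions, not taken from this log:

```yaml
# Rejected: task keywords such as delegate_to cannot sit on the include itself
# - name: container_download | Make download decision if pull is required by tag or sha256
#   include_tasks: set_docker_image_facts.yml
#   delegate_to: "{{ download_delegate }}"

# Accepted: pass the keyword through `apply:` so it is attached to the
# tasks being included rather than to the TaskInclude
- name: container_download | Make download decision if pull is required by tag or sha256
  include_tasks:
    file: set_docker_image_facts.yml
    apply:
      delegate_to: "{{ download_delegate }}"
```

Pinning a Kubespray release that matches the installed Ansible version avoids this class of failure entirely.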
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Wed Jun 12 01:22:59 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 12 Jun 2019 01:22:59 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #221 In-Reply-To: <1101276905.459.1560215879146.JavaMail.jenkins@jenkins.ci.centos.org> References: <1101276905.459.1560215879146.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <2056772925.510.1560302579508.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 56.41 KB...] changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. 
Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... [701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], 
u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── create └── prepare --> Scenario: 'default' --> Action: 'create' [DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg. PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] 
******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', 
u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
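The [701] and [703] ansible-lint failures above are all triggered by the role's meta/main.yml still carrying the ansible-galaxy skeleton placeholders ("your name", "your description", and an empty platforms list). A hedged sketch of a meta/main.yml shape that would satisfy those rules; every value below is illustrative, not the project's actual metadata:

```yaml
galaxy_info:
  author: Gluster maintainers                # illustrative; [703] wants a real author
  description: Firewall configuration role   # illustrative; [703] wants a real description
  company: example                           # optional; illustrative
  license: GPLv3                             # illustrative; use the project's actual license
  min_ansible_version: 2.5                   # illustrative
  platforms:                                 # [701] requires at least one platform
    - name: EL
      versions:
        - 7
  galaxy_tags: []
dependencies: []
```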
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Thu Jun 13 00:16:09 2019 From: ci at centos.org (ci at centos.org) Date: Thu, 13 Jun 2019 00:16:09 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #393 In-Reply-To: <2082072302.506.1560298568126.JavaMail.jenkins@jenkins.ci.centos.org> References: <2082072302.506.1560298568126.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <17317012.575.1560384969531.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 38.57 KB...] Transaction test succeeded Running transaction Installing : python36-libs-3.6.8-1.el7.x86_64 1/52 Installing : python36-3.6.8-1.el7.x86_64 2/52 Installing : apr-1.4.8-3.el7_4.1.x86_64 3/52 Installing : mpfr-3.1.1-4.el7.x86_64 4/52 Installing : libmpc-1.0.1-3.el7.x86_64 5/52 Installing : apr-util-1.5.2-6.el7.x86_64 6/52 Installing : python36-six-1.11.0-3.el7.noarch 7/52 Installing : cpp-4.8.5-36.el7_6.2.x86_64 8/52 Installing : python36-idna-2.7-2.el7.noarch 9/52 Installing : python36-pysocks-1.6.8-6.el7.noarch 10/52 Installing : python36-urllib3-1.19.1-5.el7.noarch 11/52 Installing : python36-pyroute2-0.4.13-2.el7.noarch 12/52 Installing : python36-setuptools-39.2.0-3.el7.noarch 13/52 Installing : python36-chardet-2.3.0-6.el7.noarch 14/52 Installing : python36-requests-2.12.5-3.el7.noarch 15/52 Installing : python36-distro-1.2.0-3.el7.noarch 16/52 Installing : python36-markupsafe-0.23-3.el7.x86_64 17/52 Installing : python36-jinja2-2.8.1-2.el7.noarch 18/52 Installing : python36-rpm-4.11.3-4.el7.x86_64 19/52 Installing : elfutils-0.172-2.el7.x86_64 20/52 Installing : unzip-6.0-19.el7.x86_64 21/52 Installing : dwz-0.11-3.el7.x86_64 
22/52 Installing : bzip2-1.0.6-13.el7.x86_64 23/52 Installing : usermode-1.111-5.el7.x86_64 24/52 Installing : pakchois-0.4-10.el7.x86_64 25/52 Installing : distribution-gpg-keys-1.31-1.el7.noarch 26/52 Installing : mock-core-configs-30.3-1.el7.noarch 27/52 Installing : patch-2.7.1-10.el7_5.x86_64 28/52 Installing : kernel-headers-3.10.0-957.21.2.el7.x86_64 29/52 Installing : glibc-headers-2.17-260.el7_6.5.x86_64 30/52 Installing : glibc-devel-2.17-260.el7_6.5.x86_64 31/52 Installing : gcc-4.8.5-36.el7_6.2.x86_64 32/52 Installing : libmodman-2.0.1-8.el7.x86_64 33/52 Installing : libproxy-0.4.11-11.el7.x86_64 34/52 Installing : gdb-7.6.1-114.el7.x86_64 35/52 Installing : perl-Thread-Queue-3.02-2.el7.noarch 36/52 Installing : perl-srpm-macros-1-8.el7.noarch 37/52 Installing : pigz-2.3.4-1.el7.x86_64 38/52 Installing : golang-src-1.11.5-1.el7.noarch 39/52 Installing : nettle-2.7.1-8.el7.x86_64 40/52 Installing : zip-3.0-11.el7.x86_64 41/52 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 42/52 Installing : mercurial-2.6.2-8.el7_4.x86_64 43/52 Installing : trousers-0.3.14-2.el7.x86_64 44/52 Installing : gnutls-3.3.29-9.el7_6.x86_64 45/52 Installing : neon-0.30.0-3.el7.x86_64 46/52 Installing : subversion-libs-1.7.14-14.el7.x86_64 47/52 Installing : subversion-1.7.14-14.el7.x86_64 48/52 Installing : golang-1.11.5-1.el7.x86_64 49/52 Installing : golang-bin-1.11.5-1.el7.x86_64 50/52 Installing : rpm-build-4.11.3-35.el7.x86_64 51/52 Installing : mock-1.4.16-1.el7.noarch 52/52 Verifying : trousers-0.3.14-2.el7.x86_64 1/52 Verifying : python36-idna-2.7-2.el7.noarch 2/52 Verifying : rpm-build-4.11.3-35.el7.x86_64 3/52 Verifying : python36-pysocks-1.6.8-6.el7.noarch 4/52 Verifying : mercurial-2.6.2-8.el7_4.x86_64 5/52 Verifying : zip-3.0-11.el7.x86_64 6/52 Verifying : python36-3.6.8-1.el7.x86_64 7/52 Verifying : subversion-libs-1.7.14-14.el7.x86_64 8/52 Verifying : python36-urllib3-1.19.1-5.el7.noarch 9/52 Verifying : nettle-2.7.1-8.el7.x86_64 10/52 Verifying : 
gcc-4.8.5-36.el7_6.2.x86_64 11/52 Verifying : golang-src-1.11.5-1.el7.noarch 12/52 Verifying : python36-pyroute2-0.4.13-2.el7.noarch 13/52 Verifying : pigz-2.3.4-1.el7.x86_64 14/52 Verifying : perl-srpm-macros-1-8.el7.noarch 15/52 Verifying : golang-1.11.5-1.el7.x86_64 16/52 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 17/52 Verifying : glibc-devel-2.17-260.el7_6.5.x86_64 18/52 Verifying : golang-bin-1.11.5-1.el7.x86_64 19/52 Verifying : gdb-7.6.1-114.el7.x86_64 20/52 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/52 Verifying : gnutls-3.3.29-9.el7_6.x86_64 22/52 Verifying : mock-1.4.16-1.el7.noarch 23/52 Verifying : libmodman-2.0.1-8.el7.x86_64 24/52 Verifying : python36-setuptools-39.2.0-3.el7.noarch 25/52 Verifying : mpfr-3.1.1-4.el7.x86_64 26/52 Verifying : python36-six-1.11.0-3.el7.noarch 27/52 Verifying : apr-util-1.5.2-6.el7.x86_64 28/52 Verifying : python36-chardet-2.3.0-6.el7.noarch 29/52 Verifying : kernel-headers-3.10.0-957.21.2.el7.x86_64 30/52 Verifying : patch-2.7.1-10.el7_5.x86_64 31/52 Verifying : distribution-gpg-keys-1.31-1.el7.noarch 32/52 Verifying : pakchois-0.4-10.el7.x86_64 33/52 Verifying : mock-core-configs-30.3-1.el7.noarch 34/52 Verifying : usermode-1.111-5.el7.x86_64 35/52 Verifying : apr-1.4.8-3.el7_4.1.x86_64 36/52 Verifying : libproxy-0.4.11-11.el7.x86_64 37/52 Verifying : neon-0.30.0-3.el7.x86_64 38/52 Verifying : bzip2-1.0.6-13.el7.x86_64 39/52 Verifying : subversion-1.7.14-14.el7.x86_64 40/52 Verifying : python36-distro-1.2.0-3.el7.noarch 41/52 Verifying : glibc-headers-2.17-260.el7_6.5.x86_64 42/52 Verifying : dwz-0.11-3.el7.x86_64 43/52 Verifying : unzip-6.0-19.el7.x86_64 44/52 Verifying : python36-markupsafe-0.23-3.el7.x86_64 45/52 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 46/52 Verifying : python36-requests-2.12.5-3.el7.noarch 47/52 Verifying : python36-jinja2-2.8.1-2.el7.noarch 48/52 Verifying : python36-libs-3.6.8-1.el7.x86_64 49/52 Verifying : elfutils-0.172-2.el7.x86_64 50/52 Verifying : 
python36-rpm-4.11.3-4.el7.x86_64 51/52 Verifying : libmpc-1.0.1-3.el7.x86_64 52/52 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.16-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.31-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5 glibc-headers.x86_64 0:2.17-260.el7_6.5 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.21.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.3-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python36.x86_64 0:3.6.8-1.el7 python36-chardet.noarch 0:2.3.0-6.el7 python36-distro.noarch 0:1.2.0-3.el7 python36-idna.noarch 0:2.7-2.el7 python36-jinja2.noarch 0:2.8.1-2.el7 python36-libs.x86_64 0:3.6.8-1.el7 python36-markupsafe.x86_64 0:0.23-3.el7 python36-pyroute2.noarch 0:0.4.13-2.el7 python36-pysocks.noarch 0:1.6.8-6.el7 python36-requests.noarch 0:2.12.5-3.el7 python36-rpm.x86_64 0:4.11.3-4.el7 python36-setuptools.noarch 0:39.2.0-3.el7 python36-six.noarch 0:1.11.0-3.el7 python36-urllib3.noarch 0:1.19.1-5.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1832 0 --:--:-- --:--:-- --:--:-- 1833 100 8513k 100 8513k 0 0 12.6M 0 --:--:-- --:--:-- --:--:-- 12.6M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2070 0 --:--:-- --:--:-- --:--:-- 2069 100 38.3M 100 38.3M 0 0 45.7M 0 --:--:-- --:--:-- --:--:-- 45.7M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 618 0 --:--:-- --:--:-- --:--:-- 619 0 0 0 620 0 0 1735 0 --:--:-- --:--:-- --:--:-- 1735 100 10.7M 100 10.7M 0 0 15.7M 0 --:--:-- --:--:-- --:--:-- 15.7M ~/nightlyrpmswmtq8/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmswmtq8/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmswmtq8/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmswmtq8 ~ INFO: mock.py version 1.4.16 starting (python version = 3.6.8)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmswmtq8/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.16 INFO: Mock Version: 1.4.16 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmswmtq8/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 25 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M d0a8fceae4d3425b8980b5fdf043ca9b -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.jz3tst9b:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins5572075387726003245.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done d2bd1595 +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 158 | n31.crusty | 172.19.2.31 | crusty | 3672 | Deployed | d2bd1595 | None | None | 7 | x86_64 | 1 | 2300 | None | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Thu Jun 13 00:40:59 2019 From: ci at centos.org (ci at centos.org) Date: Thu, 13 Jun 2019 00:40:59 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #197 In-Reply-To: <580465693.507.1560300056084.JavaMail.jenkins@jenkins.ci.centos.org> References: <580465693.507.1560300056084.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <279799697.576.1560386459845.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 287.51 KB...] 
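The cico-node-done-from-ansible.sh post-build step quoted repeatedly in these logs releases nodes with `for ssid in $(cat ${SSID_FILE}); do cico -q node done $ssid; done`. A sketch of a slightly more defensive variant that tolerates a missing SSID file and blank lines; the optional second argument is only there so the loop can be exercised without the real `cico` CLI, and production use would rely on the default:

```shell
# Sketch, not the script actually run by the job.
release_nodes() {
    ssid_file=$1
    cico_cmd=${2:-cico}                 # real jobs call: cico -q node done <ssid>
    [ -r "$ssid_file" ] || return 0     # no SSID file means nothing to release
    while IFS= read -r ssid; do
        [ -n "$ssid" ] || continue      # skip blank lines
        "$cico_cmd" -q node done "$ssid"
    done < "$ssid_file"
}
```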
TASK [container-engine/docker : check number of search domains] **************** Thursday 13 June 2019 01:40:17 +0100 (0:00:00.300) 0:03:02.621 ********* TASK [container-engine/docker : check length of search domains] **************** Thursday 13 June 2019 01:40:18 +0100 (0:00:00.350) 0:03:02.972 ********* TASK [container-engine/docker : check for minimum kernel version] ************** Thursday 13 June 2019 01:40:18 +0100 (0:00:00.326) 0:03:03.298 ********* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Thursday 13 June 2019 01:40:18 +0100 (0:00:00.278) 0:03:03.576 ********* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] *** Thursday 13 June 2019 01:40:19 +0100 (0:00:00.592) 0:03:04.169 ********* TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Thursday 13 June 2019 01:40:20 +0100 (0:00:01.290) 0:03:05.459 ********* TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Thursday 13 June 2019 01:40:20 +0100 (0:00:00.250) 0:03:05.709 ********* TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Thursday 13 June 2019 01:40:21 +0100 (0:00:00.256) 0:03:05.966 ********* TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Thursday 13 June 2019 01:40:21 +0100 (0:00:00.432) 0:03:06.399 ********* TASK [container-engine/docker : Configure docker repository on Fedora] ********* Thursday 13 June 2019 01:40:21 +0100 (0:00:00.311) 0:03:06.711 ********* TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Thursday 13 June 2019 01:40:22 +0100 (0:00:00.276) 0:03:06.987 ********* TASK [container-engine/docker : Copy yum.conf for editing] ********************* Thursday 13 June 2019 01:40:22 +0100 (0:00:00.270) 0:03:07.258 ********* TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Thursday 13 June 2019 
01:40:22 +0100 (0:00:00.275) 0:03:07.533 ********* TASK [container-engine/docker : ensure docker packages are installed] ********** Thursday 13 June 2019 01:40:22 +0100 (0:00:00.280) 0:03:07.813 ********* TASK [container-engine/docker : Ensure docker packages are installed] ********** Thursday 13 June 2019 01:40:23 +0100 (0:00:00.359) 0:03:08.173 ********* TASK [container-engine/docker : get available packages on Ubuntu] ************** Thursday 13 June 2019 01:40:23 +0100 (0:00:00.348) 0:03:08.522 ********* TASK [container-engine/docker : show available packages on ubuntu] ************* Thursday 13 June 2019 01:40:23 +0100 (0:00:00.284) 0:03:08.806 ********* TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Thursday 13 June 2019 01:40:24 +0100 (0:00:00.285) 0:03:09.092 ********* TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Thursday 13 June 2019 01:40:24 +0100 (0:00:00.280) 0:03:09.373 ********* ok: [kube3] ok: [kube2] ok: [kube1] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Thursday 13 June 2019 01:40:26 +0100 (0:00:01.949) 0:03:11.323 ********* ok: [kube1] ok: [kube2] ok: [kube3] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Thursday 13 June 2019 01:40:27 +0100 (0:00:01.218) 0:03:12.541 ********* TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Thursday 13 June 2019 01:40:27 +0100 (0:00:00.287) 0:03:12.828 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Thursday 13 June 2019 01:40:28 +0100 (0:00:01.024) 0:03:13.853 ********* TASK [container-engine/docker : get systemd version] *************************** Thursday 13 June 2019 01:40:29 +0100 (0:00:00.309) 0:03:14.163 ********* TASK [container-engine/docker : Write docker.service systemd file] ************* Thursday 13 June 2019 01:40:29 +0100 (0:00:00.295) 0:03:14.459 ********* TASK [container-engine/docker : Write docker options systemd drop-in] ********** Thursday 13 June 2019 01:40:29 +0100 (0:00:00.354) 0:03:14.813 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Thursday 13 June 2019 01:40:32 +0100 (0:00:02.202) 0:03:17.016 ********* changed: [kube1] changed: [kube3] changed: [kube2] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Thursday 13 June 2019 01:40:33 +0100 (0:00:01.871) 0:03:18.888 ********* TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Thursday 13 June 2019 01:40:34 +0100 (0:00:00.318) 0:03:19.206 ********* RUNNING HANDLER [container-engine/docker : restart docker] ********************* Thursday 13 June 2019 01:40:34 +0100 (0:00:00.230) 0:03:19.437 ********* changed: [kube1] changed: [kube3] changed: [kube2] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Thursday 13 June 2019 01:40:35 +0100 (0:00:00.951) 0:03:20.388 ********* changed: [kube1] changed: [kube3] changed: [kube2] RUNNING HANDLER 
[container-engine/docker : Docker | reload docker.socket] ****** Thursday 13 June 2019 01:40:36 +0100 (0:00:01.153) 0:03:21.543 ********* RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Thursday 13 June 2019 01:40:36 +0100 (0:00:00.287) 0:03:21.831 ********* changed: [kube1] changed: [kube3] changed: [kube2] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Thursday 13 June 2019 01:40:41 +0100 (0:00:04.271) 0:03:26.102 ********* Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube1] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Thursday 13 June 2019 01:40:51 +0100 (0:00:10.204) 0:03:36.306 ********* changed: [kube1] changed: [kube3] changed: [kube2] TASK [container-engine/docker : ensure docker service is started and enabled] *** Thursday 13 June 2019 01:40:52 +0100 (0:00:01.218) 0:03:37.525 ********* ok: [kube1] => (item=docker) ok: [kube2] => (item=docker) ok: [kube3] => (item=docker) TASK [download : include_tasks] ************************************************ Thursday 13 June 2019 01:40:53 +0100 (0:00:01.288) 0:03:38.813 ********* included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Thursday 13 June 2019 01:40:54 +0100 (0:00:00.524) 0:03:39.338 ********* ok: [kube1] ok: [kube2] ok: [kube3] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Thursday 13 June 2019 01:40:55 +0100 (0:00:01.187) 0:03:40.525 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [download : container_download | create local directory for saved/loaded container images] *** Thursday 13 June 2019 01:40:56 +0100 (0:00:00.981) 0:03:41.507 ********* TASK [download : 
Download items] *********************************************** Thursday 13 June 2019 01:40:56 +0100 (0:00:00.116) 0:03:41.623 *********
fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
fatal: [kube2]: FAILED! => [...same 'delegate_to' TaskInclude error as kube1...]
fatal: [kube3]: FAILED! => [...same 'delegate_to' TaskInclude error as kube1...]
[...the identical fatal error repeats for kube1, kube2 and kube3 on each remaining "Download items" include; duplicates truncated...]
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
PLAY RECAP *********************************************************************
kube1 : ok=109 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0
kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0
kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0
Thursday 13 June 2019 01:40:59 +0100 (0:00:02.725) 0:03:44.348 *********
===============================================================================
Install packages ------------------------------------------------------- 33.59s
Wait for host to be available ------------------------------------------ 24.05s
gather facts from all instances ---------------------------------------- 17.28s
container-engine/docker : Docker | pause while Docker restarts --------- 10.20s
Persist loaded modules -------------------------------------------------- 6.10s
container-engine/docker : Docker | reload docker ------------------------ 4.27s
kubernetes/preinstall : Create kubernetes directories ------------------- 4.06s
download : Download items ----------------------------------------------- 2.73s
Load required kernel modules -------------------------------------------- 2.69s
kubernetes/preinstall : Create cni directories -------------------------- 2.68s
bootstrap-os : Gather nodes hostnames ----------------------------------- 2.64s
Extend root VG ---------------------------------------------------------- 2.44s
bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.33s
container-engine/docker : Write docker options systemd drop-in ---------- 2.20s
kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.19s
Gathering Facts --------------------------------------------------------- 2.18s
kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.16s
download : Sync container ----------------------------------------------- 2.07s
kubernetes/preinstall : Set selinux policy ------------------------------ 2.03s
download : Download items ----------------------------------------------- 2.01s
==> kube3: An error occurred. The error will be shown after all tasks complete.
An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below.
An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again:
Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again.
Build step 'Execute shell' marked build as failure
Performing Post build task...
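The repeated fatal above comes from Ansible refusing `delegate_to` as a direct attribute of a dynamic include (a TaskInclude): older releases only warned about task keywords on `include_tasks`, while newer ones reject them outright. A minimal sketch of the failing shape and one commonly used workaround, moving the keyword under `apply` (available for `include_tasks` since Ansible 2.7) — hypothetical task names, not the actual kubespray change:

```yaml
# Broken: newer Ansible rejects 'delegate_to' directly on a dynamic include.
- name: broken | dynamic include with delegate_to
  include_tasks: download_container.yml
  delegate_to: localhost   # => "'delegate_to' is not a valid attribute for a TaskInclude"

# One possible fix: hand the keyword to the included tasks via 'apply'.
- name: fixed | dynamic include with apply
  include_tasks:
    file: download_container.yml
    apply:
      delegate_to: localhost
```

Alternatively, the keyword can be set on each task inside the included file, or the include switched to `import_tasks`, which does accept most task keywords.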
Could not match :Build started : False
Logical operation result is FALSE
Skipping script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0
From ci at centos.org  Thu Jun 13 01:19:36 2019
From: ci at centos.org (ci at centos.org)
Date: Thu, 13 Jun 2019 01:19:36 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #222
In-Reply-To: <2056772925.510.1560302579508.JavaMail.jenkins@jenkins.ci.centos.org>
References: <2056772925.510.1560302579508.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1374903365.584.1560388776583.JavaMail.jenkins@jenkins.ci.centos.org>

See
------------------------------------------
[...truncated 56.46 KB...]
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Wait for instance(s) creation to complete] *******************************
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
--> Scenario: 'default'
--> Action: 'prepare'
PLAY [Prepare] *****************************************************************
TASK [Gathering Facts] *********************************************************
ok: [instance]
TASK [Install Dependency Packages] *********************************************
changed: [instance]
TASK [Install Dependency Packages] *********************************************
changed: [instance]
PLAY RECAP *********************************************************************
instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: author
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[...same default galaxy_info metadata dict as above; truncated...]
[703] Should change default metadata: description
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[...same default galaxy_info metadata dict as above; truncated...]
[703] Should change default metadata: company
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[...same default galaxy_info metadata dict as above; truncated...]
[703] Should change default metadata: license
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[...same default galaxy_info metadata dict as above; truncated...]
An error occurred during the test sequence action: 'lint'. Cleaning up.
--> Scenario: 'default'
--> Action: 'destroy'
PLAY [Destroy] *****************************************************************
TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Delete docker network(s)] ************************************************
PLAY RECAP *********************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── create
    └── prepare

--> Scenario: 'default'
--> Action: 'create'
[DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
PLAY [Create] ******************************************************************
TASK [Log into a Docker registry] **********************************************
skipping: [localhost] => (item=None)
TASK [Create Dockerfiles from image names] *************************************
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Discover local Docker images] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]
TASK [Build an Ansible compatible image] ***************************************
ok: [localhost] => (item=None)
ok: [localhost]
TASK [Create docker network(s)] ************************************************
TASK [Determine the CMD directives] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]
TASK [Create molecule instance(s)] *********************************************
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Wait for instance(s) creation to complete] *******************************
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
--> Scenario: 'default'
--> Action: 'prepare'
PLAY [Prepare] *****************************************************************
TASK [Gathering Facts] *********************************************************
ok: [instance]
TASK [Install Dependency Packages] *********************************************
changed: [instance]
TASK [Install Dependency Packages] *********************************************
changed: [instance]
PLAY RECAP *********************************************************************
instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[...default galaxy_info metadata dict; truncated...]
[703] Should change default metadata: author
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[...default galaxy_info metadata dict; truncated...]
[703] Should change default metadata: description
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[...default galaxy_info metadata dict; truncated...]
[703] Should change default metadata: company
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[...default galaxy_info metadata dict; truncated...]
[703] Should change default metadata: license
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[...default galaxy_info metadata dict; truncated...]
An error occurred during the test sequence action: 'lint'. Cleaning up.
--> Scenario: 'default'
--> Action: 'destroy'
PLAY [Destroy] *****************************************************************
TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Delete docker network(s)] ************************************************
PLAY RECAP *********************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
Build step 'Execute shell' marked build as failure
Performing Post build task...
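For reference, the `cico-node-done-from-ansible.sh` snippet that the post-build step skips is just a loop handing each SSID from the workspace file to the `cico` client. A self-contained sketch of the same pattern, with the real `cico` CLI (which talks to the CentOS CI node service) replaced by a stub function so the loop can be exercised anywhere:

```shell
#!/bin/sh
# Sketch of the cico node-release loop from cico-node-done-from-ansible.sh.
# `cico` is stubbed here; in CI it is the real python-cicoclient binary.
cico() {
    # stub: record which node SSID would have been released ($4 is the ssid)
    RELEASED="${RELEASED}${RELEASED:+ }$4"
}

SSID_FILE=${SSID_FILE:-./cico-ssid}
printf 'abc123\ndef456\n' > "$SSID_FILE"   # stand-in SSID file contents

RELEASED=""
for ssid in $(cat "$SSID_FILE")
do
    cico -q node done "$ssid"
done
rm -f "$SSID_FILE"

echo "$RELEASED"
```

Because `cico` is a function defined in the same shell, the loop body sees the stub instead of the real client, which makes the release logic testable without CI credentials.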
Could not match :Build started : False
Logical operation result is FALSE
Skipping script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0
From ci at centos.org  Fri Jun 14 00:16:18 2019
From: ci at centos.org (ci at centos.org)
Date: Fri, 14 Jun 2019 00:16:18 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #394
In-Reply-To: <17317012.575.1560384969531.JavaMail.jenkins@jenkins.ci.centos.org>
References: <17317012.575.1560384969531.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <493786817.629.1560471378149.JavaMail.jenkins@jenkins.ci.centos.org>

See
------------------------------------------
[...truncated 38.59 KB...]
Transaction test succeeded
Running transaction
Installing : python36-libs-3.6.8-1.el7.x86_64 1/52 Installing : python36-3.6.8-1.el7.x86_64 2/52 Installing : apr-1.4.8-3.el7_4.1.x86_64 3/52 Installing : mpfr-3.1.1-4.el7.x86_64 4/52 Installing : libmpc-1.0.1-3.el7.x86_64 5/52 Installing : apr-util-1.5.2-6.el7.x86_64 6/52 Installing : python36-six-1.11.0-3.el7.noarch 7/52 Installing : cpp-4.8.5-36.el7_6.2.x86_64 8/52 Installing : python36-idna-2.7-2.el7.noarch 9/52 Installing : python36-pysocks-1.6.8-6.el7.noarch 10/52 Installing : python36-urllib3-1.19.1-5.el7.noarch 11/52 Installing : python36-pyroute2-0.4.13-2.el7.noarch 12/52 Installing : python36-setuptools-39.2.0-3.el7.noarch 13/52 Installing : python36-chardet-2.3.0-6.el7.noarch 14/52 Installing : python36-requests-2.12.5-3.el7.noarch 15/52 Installing : python36-distro-1.2.0-3.el7.noarch 16/52 Installing : python36-markupsafe-0.23-3.el7.x86_64 17/52 Installing : python36-jinja2-2.8.1-2.el7.noarch 18/52 Installing : python36-rpm-4.11.3-4.el7.x86_64 19/52 Installing : elfutils-0.172-2.el7.x86_64 20/52 Installing : unzip-6.0-19.el7.x86_64 21/52 Installing : dwz-0.11-3.el7.x86_64 22/52 
Installing : bzip2-1.0.6-13.el7.x86_64 23/52 Installing : usermode-1.111-5.el7.x86_64 24/52 Installing : pakchois-0.4-10.el7.x86_64 25/52 Installing : distribution-gpg-keys-1.31-1.el7.noarch 26/52 Installing : mock-core-configs-30.3-1.el7.noarch 27/52 Installing : patch-2.7.1-10.el7_5.x86_64 28/52 Installing : kernel-headers-3.10.0-957.21.2.el7.x86_64 29/52 Installing : glibc-headers-2.17-260.el7_6.5.x86_64 30/52 Installing : glibc-devel-2.17-260.el7_6.5.x86_64 31/52 Installing : gcc-4.8.5-36.el7_6.2.x86_64 32/52 Installing : libmodman-2.0.1-8.el7.x86_64 33/52 Installing : libproxy-0.4.11-11.el7.x86_64 34/52 Installing : gdb-7.6.1-114.el7.x86_64 35/52 Installing : perl-Thread-Queue-3.02-2.el7.noarch 36/52 Installing : perl-srpm-macros-1-8.el7.noarch 37/52 Installing : pigz-2.3.4-1.el7.x86_64 38/52 Installing : golang-src-1.11.5-1.el7.noarch 39/52 Installing : nettle-2.7.1-8.el7.x86_64 40/52 Installing : zip-3.0-11.el7.x86_64 41/52 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 42/52 Installing : mercurial-2.6.2-8.el7_4.x86_64 43/52 Installing : trousers-0.3.14-2.el7.x86_64 44/52 Installing : gnutls-3.3.29-9.el7_6.x86_64 45/52 Installing : neon-0.30.0-3.el7.x86_64 46/52 Installing : subversion-libs-1.7.14-14.el7.x86_64 47/52 Installing : subversion-1.7.14-14.el7.x86_64 48/52 Installing : golang-1.11.5-1.el7.x86_64 49/52 Installing : golang-bin-1.11.5-1.el7.x86_64 50/52 Installing : rpm-build-4.11.3-35.el7.x86_64 51/52 Installing : mock-1.4.16-1.el7.noarch 52/52 Verifying : trousers-0.3.14-2.el7.x86_64 1/52 Verifying : python36-idna-2.7-2.el7.noarch 2/52 Verifying : rpm-build-4.11.3-35.el7.x86_64 3/52 Verifying : python36-pysocks-1.6.8-6.el7.noarch 4/52 Verifying : mercurial-2.6.2-8.el7_4.x86_64 5/52 Verifying : zip-3.0-11.el7.x86_64 6/52 Verifying : python36-3.6.8-1.el7.x86_64 7/52 Verifying : subversion-libs-1.7.14-14.el7.x86_64 8/52 Verifying : python36-urllib3-1.19.1-5.el7.noarch 9/52 Verifying : nettle-2.7.1-8.el7.x86_64 10/52 Verifying : 
gcc-4.8.5-36.el7_6.2.x86_64 11/52 Verifying : golang-src-1.11.5-1.el7.noarch 12/52 Verifying : python36-pyroute2-0.4.13-2.el7.noarch 13/52 Verifying : pigz-2.3.4-1.el7.x86_64 14/52 Verifying : perl-srpm-macros-1-8.el7.noarch 15/52 Verifying : golang-1.11.5-1.el7.x86_64 16/52 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 17/52 Verifying : glibc-devel-2.17-260.el7_6.5.x86_64 18/52 Verifying : golang-bin-1.11.5-1.el7.x86_64 19/52 Verifying : gdb-7.6.1-114.el7.x86_64 20/52 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/52 Verifying : gnutls-3.3.29-9.el7_6.x86_64 22/52 Verifying : mock-1.4.16-1.el7.noarch 23/52 Verifying : libmodman-2.0.1-8.el7.x86_64 24/52 Verifying : python36-setuptools-39.2.0-3.el7.noarch 25/52 Verifying : mpfr-3.1.1-4.el7.x86_64 26/52 Verifying : python36-six-1.11.0-3.el7.noarch 27/52 Verifying : apr-util-1.5.2-6.el7.x86_64 28/52 Verifying : python36-chardet-2.3.0-6.el7.noarch 29/52 Verifying : kernel-headers-3.10.0-957.21.2.el7.x86_64 30/52 Verifying : patch-2.7.1-10.el7_5.x86_64 31/52 Verifying : distribution-gpg-keys-1.31-1.el7.noarch 32/52 Verifying : pakchois-0.4-10.el7.x86_64 33/52 Verifying : mock-core-configs-30.3-1.el7.noarch 34/52 Verifying : usermode-1.111-5.el7.x86_64 35/52 Verifying : apr-1.4.8-3.el7_4.1.x86_64 36/52 Verifying : libproxy-0.4.11-11.el7.x86_64 37/52 Verifying : neon-0.30.0-3.el7.x86_64 38/52 Verifying : bzip2-1.0.6-13.el7.x86_64 39/52 Verifying : subversion-1.7.14-14.el7.x86_64 40/52 Verifying : python36-distro-1.2.0-3.el7.noarch 41/52 Verifying : glibc-headers-2.17-260.el7_6.5.x86_64 42/52 Verifying : dwz-0.11-3.el7.x86_64 43/52 Verifying : unzip-6.0-19.el7.x86_64 44/52 Verifying : python36-markupsafe-0.23-3.el7.x86_64 45/52 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 46/52 Verifying : python36-requests-2.12.5-3.el7.noarch 47/52 Verifying : python36-jinja2-2.8.1-2.el7.noarch 48/52 Verifying : python36-libs-3.6.8-1.el7.x86_64 49/52 Verifying : elfutils-0.172-2.el7.x86_64 50/52 Verifying : 
python36-rpm-4.11.3-4.el7.x86_64 51/52 Verifying : libmpc-1.0.1-3.el7.x86_64 52/52 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.16-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.31-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5 glibc-headers.x86_64 0:2.17-260.el7_6.5 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.21.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.3-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python36.x86_64 0:3.6.8-1.el7 python36-chardet.noarch 0:2.3.0-6.el7 python36-distro.noarch 0:1.2.0-3.el7 python36-idna.noarch 0:2.7-2.el7 python36-jinja2.noarch 0:2.8.1-2.el7 python36-libs.x86_64 0:3.6.8-1.el7 python36-markupsafe.x86_64 0:0.23-3.el7 python36-pyroute2.noarch 0:0.4.13-2.el7 python36-pysocks.noarch 0:1.6.8-6.el7 python36-requests.noarch 0:2.12.5-3.el7 python36-rpm.x86_64 0:4.11.3-4.el7 python36-setuptools.noarch 0:39.2.0-3.el7 python36-six.noarch 0:1.11.0-3.el7 python36-urllib3.noarch 0:1.19.1-5.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0 [curl progress output omitted] Installing gometalinter. Version: 2.0.5 [curl progress output omitted] Installing etcd. Version: v3.3.9 [curl progress output omitted] ~/nightlyrpm7GHymh/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpm7GHymh/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpm7GHymh/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpm7GHymh ~ INFO: mock.py version 1.4.16 starting (python version = 3.6.8)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpm7GHymh/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.16 INFO: Mock Version: 1.4.16 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpm7GHymh/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 25 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 7f6e81a0553e4368964074986cb320be -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.i6i08f9h:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
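The failing step in the log above is mock rebuilding the glusterd2 SRPM in an epel-7-x86_64 chroot. A minimal sketch of that invocation, printed rather than executed so it is safe to run without mock installed (the SRPM path and config name are taken from the log; drop the echo inside the function to actually rebuild):

```shell
#!/bin/sh
# Reproduce the failing CI step: mock rebuilding the glusterd2 SRPM
# in a clean epel-7-x86_64 chroot. Paths come from the log above.
SRPM=/root/nightlyrpm7GHymh/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
CONFIG=epel-7-x86_64

mock_rebuild_cmd() {
    # Print the command instead of running it, so this sketch can be
    # dry-run on a machine without mock; remove the echo to rebuild.
    echo mock -r "$1" rebuild "$2"
}

mock_rebuild_cmd "$CONFIG" "$SRPM"
```

mock drops its build.log, root.log and any RPMs into the result directory (the "Results and/or logs in:" line in the log above), which is where the actual rpmbuild failure would be diagnosed.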
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins4147380323905390145.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done 059c2e27 +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 91 | n27.pufty | 172.19.3.91 | pufty | 3676 | Deployed | 059c2e27 | None | None | 7 | x86_64 | 1 | 2260 | None | +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Fri Jun 14 00:40:50 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 14 Jun 2019 00:40:50 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #198 In-Reply-To: <279799697.576.1560386459845.JavaMail.jenkins@jenkins.ci.centos.org> References: <279799697.576.1560386459845.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <406973870.630.1560472850985.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 287.52 KB...] 
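The cico-node-done-from-ansible.sh post-build step that ran above, reformatted as a standalone script. The CICO variable is an addition so the sketch can be dry-run without the real cico CLI; the CI job calls cico directly:

```shell
#!/bin/sh
# cico-node-done-from-ansible.sh (reformatted from the log above)
# Releases Duffy nodes listed in a SSID file written earlier in the job.
# CICO is parameterised here only for dry-running this sketch.
CICO=${CICO:-cico}
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

release_nodes() {
    for ssid in $(cat "${SSID_FILE}")
    do
        "$CICO" -q node done "$ssid"
    done
}
```

In the successful run above this loop released session 059c2e27 on node n27.pufty; in the skipped runs ("Could not match :Build started : False") the node was never provisioned, so the script does not execute.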
TASK [container-engine/docker : check number of search domains] **************** Friday 14 June 2019 01:40:08 +0100 (0:00:00.294) 0:03:00.451 *********** TASK [container-engine/docker : check length of search domains] **************** Friday 14 June 2019 01:40:09 +0100 (0:00:00.290) 0:03:00.742 *********** TASK [container-engine/docker : check for minimum kernel version] ************** Friday 14 June 2019 01:40:09 +0100 (0:00:00.343) 0:03:01.086 *********** TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Friday 14 June 2019 01:40:09 +0100 (0:00:00.294) 0:03:01.381 *********** TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] *** Friday 14 June 2019 01:40:10 +0100 (0:00:00.600) 0:03:01.981 *********** TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Friday 14 June 2019 01:40:11 +0100 (0:00:01.270) 0:03:03.251 *********** TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Friday 14 June 2019 01:40:11 +0100 (0:00:00.259) 0:03:03.511 *********** TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Friday 14 June 2019 01:40:12 +0100 (0:00:00.249) 0:03:03.761 *********** TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Friday 14 June 2019 01:40:12 +0100 (0:00:00.298) 0:03:04.059 *********** TASK [container-engine/docker : Configure docker repository on Fedora] ********* Friday 14 June 2019 01:40:12 +0100 (0:00:00.296) 0:03:04.356 *********** TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Friday 14 June 2019 01:40:12 +0100 (0:00:00.280) 0:03:04.637 *********** TASK [container-engine/docker : Copy yum.conf for editing] ********************* Friday 14 June 2019 01:40:13 +0100 (0:00:00.276) 0:03:04.913 *********** TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Friday 14 June 2019 
01:40:13 +0100 (0:00:00.289) 0:03:05.203 *********** TASK [container-engine/docker : ensure docker packages are installed] ********** Friday 14 June 2019 01:40:13 +0100 (0:00:00.284) 0:03:05.487 *********** TASK [container-engine/docker : Ensure docker packages are installed] ********** Friday 14 June 2019 01:40:14 +0100 (0:00:00.351) 0:03:05.839 *********** TASK [container-engine/docker : get available packages on Ubuntu] ************** Friday 14 June 2019 01:40:14 +0100 (0:00:00.350) 0:03:06.189 *********** TASK [container-engine/docker : show available packages on ubuntu] ************* Friday 14 June 2019 01:40:14 +0100 (0:00:00.275) 0:03:06.465 *********** TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Friday 14 June 2019 01:40:15 +0100 (0:00:00.285) 0:03:06.750 *********** TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Friday 14 June 2019 01:40:15 +0100 (0:00:00.291) 0:03:07.042 *********** ok: [kube3] ok: [kube2] ok: [kube1] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Friday 14 June 2019 01:40:17 +0100 (0:00:01.949) 0:03:08.992 *********** ok: [kube1] ok: [kube2] ok: [kube3] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Friday 14 June 2019 01:40:18 +0100 (0:00:01.144) 0:03:10.137 *********** TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Friday 14 June 2019 01:40:18 +0100 (0:00:00.283) 0:03:10.420 *********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Friday 14 June 2019 01:40:19 +0100 (0:00:01.126) 0:03:11.546 *********** TASK [container-engine/docker : get systemd version] *************************** Friday 14 June 2019 01:40:20 +0100 (0:00:00.333) 0:03:11.880 *********** TASK [container-engine/docker : Write docker.service systemd file] ************* Friday 14 June 2019 01:40:20 +0100 (0:00:00.295) 0:03:12.176 *********** TASK [container-engine/docker : Write docker options systemd drop-in] ********** Friday 14 June 2019 01:40:20 +0100 (0:00:00.300) 0:03:12.476 *********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Friday 14 June 2019 01:40:22 +0100 (0:00:02.222) 0:03:14.699 *********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Friday 14 June 2019 01:40:24 +0100 (0:00:01.963) 0:03:16.662 *********** TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Friday 14 June 2019 01:40:25 +0100 (0:00:00.302) 0:03:16.965 *********** RUNNING HANDLER [container-engine/docker : restart docker] ********************* Friday 14 June 2019 01:40:25 +0100 (0:00:00.228) 0:03:17.194 *********** changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Friday 14 June 2019 01:40:26 +0100 (0:00:01.018) 0:03:18.212 *********** changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER 
[container-engine/docker : Docker | reload docker.socket] ****** Friday 14 June 2019 01:40:27 +0100 (0:00:01.220) 0:03:19.433 *********** RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Friday 14 June 2019 01:40:28 +0100 (0:00:00.309) 0:03:19.743 *********** changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Friday 14 June 2019 01:40:32 +0100 (0:00:04.266) 0:03:24.009 *********** Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube1] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Friday 14 June 2019 01:40:42 +0100 (0:00:10.170) 0:03:34.180 *********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : ensure docker service is started and enabled] *** Friday 14 June 2019 01:40:43 +0100 (0:00:01.215) 0:03:35.395 *********** ok: [kube1] => (item=docker) ok: [kube2] => (item=docker) ok: [kube3] => (item=docker) TASK [download : include_tasks] ************************************************ Friday 14 June 2019 01:40:44 +0100 (0:00:01.287) 0:03:36.683 *********** included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Friday 14 June 2019 01:40:45 +0100 (0:00:00.508) 0:03:37.191 *********** ok: [kube1] ok: [kube3] ok: [kube2] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Friday 14 June 2019 01:40:46 +0100 (0:00:01.180) 0:03:38.372 *********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [download : container_download | create local directory for saved/loaded container images] *** Friday 14 June 2019 01:40:47 +0100 (0:00:01.039) 0:03:39.411 *********** TASK [download : 
Download items] *********************************************** Friday 14 June 2019 01:40:47 +0100 (0:00:00.137) 0:03:39.548 *********** fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} [the same 'delegate_to' failure repeats verbatim for kube1, kube2 and kube3 on each subsequent download task; repeats omitted] included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 [further identical 'delegate_to' failures omitted] fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} PLAY RECAP ********************************************************************* kube1 : ok=109 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0 kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0 kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0 Friday 14 June 2019 01:40:50 +0100 (0:00:02.759) 0:03:42.308 *********** =============================================================================== Install packages ------------------------------------------------------- 35.46s Wait for host to be available ------------------------------------------ 21.69s gather facts from all instances ---------------------------------------- 16.70s container-engine/docker : Docker | pause while Docker restarts --------- 10.17s Persist loaded modules -------------------------------------------------- 5.79s container-engine/docker : Docker | reload docker ------------------------ 4.27s kubernetes/preinstall : Create kubernetes directories ------------------- 4.01s download : Download items ----------------------------------------------- 2.76s bootstrap-os : Gather nodes 
hostnames ----------------------------------- 2.64s kubernetes/preinstall : Create cni directories -------------------------- 2.62s bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.62s Load required kernel modules -------------------------------------------- 2.61s Extend root VG ---------------------------------------------------------- 2.56s Gathering Facts --------------------------------------------------------- 2.33s container-engine/docker : Write docker options systemd drop-in ---------- 2.22s kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.22s kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.16s download : Sync container ----------------------------------------------- 2.03s Extend the root LV and FS to occupy remaining space --------------------- 2.00s kubernetes/preinstall : Hosts | Update (if necessary) hosts file -------- 1.97s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
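The repeated fatal above is newer Ansible (2.8+) rejecting delegate_to set directly on an include_tasks of download_container.yml. A sketch of the shape of the fix, assuming the delegation is still wanted on the included tasks (task layout and the download_delegate variable are illustrative, not lifted from the repository):

```yaml
# Rejected by Ansible >= 2.8: delegate_to on the include itself.
- include_tasks: download_container.yml
  delegate_to: "{{ download_delegate }}"

# Accepted: push the keyword down onto the included tasks via apply.
- include_tasks: download_container.yml
  apply:
    delegate_to: "{{ download_delegate }}"
```

Alternatively the delegate_to can be set on the individual tasks inside download_container.yml; either way the include itself carries no per-task keywords.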
Could not match :Build started : False
Logical operation result is FALSE
Skipping script  :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by

SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

END OF POST BUILD TASK : 0
From ci at centos.org  Fri Jun 14 01:23:02 2019
From: ci at centos.org (ci at centos.org)
Date: Fri, 14 Jun 2019 01:23:02 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #223
In-Reply-To: <1374903365.584.1560388776583.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1374903365.584.1560388776583.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1779039821.632.1560475382961.JavaMail.jenkins@jenkins.ci.centos.org>

See ------------------------------------------
[...truncated 56.46 KB...]
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) creation to complete] *******************************
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=6    changed=4    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0

--> Scenario: 'default'
--> Action: 'prepare'

PLAY [Prepare] *****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

PLAY RECAP *********************************************************************
instance                   : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: author
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: description
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: company
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: license
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
An error occurred during the test sequence action: 'lint'. Cleaning up.
--> Scenario: 'default'
--> Action: 'destroy'

PLAY [Destroy] *****************************************************************

TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Delete docker network(s)] ************************************************

PLAY RECAP *********************************************************************
localhost                  : ok=2    changed=2    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0

./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── create
    └── prepare

--> Scenario: 'default'
--> Action: 'create'
[DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility
but usage is discouraged. The module documentation details page may explain
more about this rationale.. This feature will be removed in a future release.
Deprecation warnings can be disabled by setting deprecation_warnings=False in
ansible.cfg.

PLAY [Create] ******************************************************************

TASK [Log into a Docker registry] **********************************************
skipping: [localhost] => (item=None)

TASK [Create Dockerfiles from image names] *************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Discover local Docker images] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Build an Ansible compatible image] ***************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Create docker network(s)] ************************************************

TASK [Determine the CMD directives] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Create molecule instance(s)] *********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) creation to complete] *******************************
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=6    changed=3    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0

--> Scenario: 'default'
--> Action: 'prepare'

PLAY [Prepare] *****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

PLAY RECAP *********************************************************************
instance                   : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: author
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: description
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: company
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: license
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
An error occurred during the test sequence action: 'lint'. Cleaning up.
--> Scenario: 'default'
--> Action: 'destroy'

PLAY [Destroy] *****************************************************************

TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Delete docker network(s)] ************************************************

PLAY RECAP *********************************************************************
localhost                  : ok=2    changed=2    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0

Build step 'Execute shell' marked build as failure
Performing Post build task...
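[Editor's note] The [701]/[703] ansible-lint findings above all point at placeholder Galaxy metadata left in the role's meta/main.yml. A minimal sketch of a meta/main.yml that would satisfy those rules — the concrete values here are illustrative placeholders, not the role's real metadata:

```yaml
galaxy_info:
  author: A. Maintainer                    # [703] replace the template default "your name"
  description: Configures firewalld for GlusterFS hosts   # [703] real description
  company: Example Org                     # [703] or drop the field entirely
  license: GPLv3                           # [703] a concrete license, not "license (GPLv2, CC-BY, etc)"
  min_ansible_version: 2.4                 # illustrative minimum
  platforms:                               # [701] role info should contain platforms
    - name: EL
      versions:
        - 7
  galaxy_tags: []
dependencies: []
```

Filling these fields in (or suppressing the rules in an .ansible-lint config) would let the molecule 'lint' action pass and the scenario continue past this point.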
Could not match :Build started : False
Logical operation result is FALSE
Skipping script  :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by

SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

END OF POST BUILD TASK : 0
From ci at centos.org  Sat Jun 15 00:13:51 2019
From: ci at centos.org (ci at centos.org)
Date: Sat, 15 Jun 2019 00:13:51 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #395
In-Reply-To: <493786817.629.1560471378149.JavaMail.jenkins@jenkins.ci.centos.org>
References: <493786817.629.1560471378149.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <829005239.694.1560557631383.JavaMail.jenkins@jenkins.ci.centos.org>

See ------------------------------------------
[...truncated 38.61 KB...]
Transaction test succeeded
Running transaction
Installing : python36-libs-3.6.8-1.el7.x86_64 1/52 Installing : python36-3.6.8-1.el7.x86_64 2/52 Installing : apr-1.4.8-3.el7_4.1.x86_64 3/52 Installing : mpfr-3.1.1-4.el7.x86_64 4/52 Installing : libmpc-1.0.1-3.el7.x86_64 5/52 Installing : apr-util-1.5.2-6.el7.x86_64 6/52 Installing : python36-six-1.11.0-3.el7.noarch 7/52 Installing : cpp-4.8.5-36.el7_6.2.x86_64 8/52 Installing : python36-idna-2.7-2.el7.noarch 9/52 Installing : python36-pysocks-1.6.8-6.el7.noarch 10/52 Installing : python36-urllib3-1.19.1-5.el7.noarch 11/52 Installing : python36-pyroute2-0.4.13-2.el7.noarch 12/52 Installing : python36-setuptools-39.2.0-3.el7.noarch 13/52 Installing : python36-chardet-2.3.0-6.el7.noarch 14/52 Installing : python36-requests-2.12.5-3.el7.noarch 15/52 Installing : python36-distro-1.2.0-3.el7.noarch 16/52 Installing : python36-markupsafe-0.23-3.el7.x86_64 17/52 Installing : python36-jinja2-2.8.1-2.el7.noarch 18/52 Installing : python36-rpm-4.11.3-4.el7.x86_64 19/52 Installing : elfutils-0.172-2.el7.x86_64 20/52 Installing : unzip-6.0-19.el7.x86_64 21/52 Installing : dwz-0.11-3.el7.x86_64 
22/52 Installing : bzip2-1.0.6-13.el7.x86_64 23/52 Installing : usermode-1.111-5.el7.x86_64 24/52 Installing : pakchois-0.4-10.el7.x86_64 25/52 Installing : distribution-gpg-keys-1.31-1.el7.noarch 26/52 Installing : mock-core-configs-30.3-1.el7.noarch 27/52 Installing : patch-2.7.1-10.el7_5.x86_64 28/52 Installing : kernel-headers-3.10.0-957.21.2.el7.x86_64 29/52 Installing : glibc-headers-2.17-260.el7_6.5.x86_64 30/52 Installing : glibc-devel-2.17-260.el7_6.5.x86_64 31/52 Installing : gcc-4.8.5-36.el7_6.2.x86_64 32/52 Installing : libmodman-2.0.1-8.el7.x86_64 33/52 Installing : libproxy-0.4.11-11.el7.x86_64 34/52 Installing : gdb-7.6.1-114.el7.x86_64 35/52 Installing : perl-Thread-Queue-3.02-2.el7.noarch 36/52 Installing : perl-srpm-macros-1-8.el7.noarch 37/52 Installing : pigz-2.3.4-1.el7.x86_64 38/52 Installing : golang-src-1.11.5-1.el7.noarch 39/52 Installing : nettle-2.7.1-8.el7.x86_64 40/52 Installing : zip-3.0-11.el7.x86_64 41/52 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 42/52 Installing : mercurial-2.6.2-8.el7_4.x86_64 43/52 Installing : trousers-0.3.14-2.el7.x86_64 44/52 Installing : gnutls-3.3.29-9.el7_6.x86_64 45/52 Installing : neon-0.30.0-3.el7.x86_64 46/52 Installing : subversion-libs-1.7.14-14.el7.x86_64 47/52 Installing : subversion-1.7.14-14.el7.x86_64 48/52 Installing : golang-1.11.5-1.el7.x86_64 49/52 Installing : golang-bin-1.11.5-1.el7.x86_64 50/52 Installing : rpm-build-4.11.3-35.el7.x86_64 51/52 Installing : mock-1.4.16-1.el7.noarch 52/52 Verifying : trousers-0.3.14-2.el7.x86_64 1/52 Verifying : python36-idna-2.7-2.el7.noarch 2/52 Verifying : rpm-build-4.11.3-35.el7.x86_64 3/52 Verifying : python36-pysocks-1.6.8-6.el7.noarch 4/52 Verifying : mercurial-2.6.2-8.el7_4.x86_64 5/52 Verifying : zip-3.0-11.el7.x86_64 6/52 Verifying : python36-3.6.8-1.el7.x86_64 7/52 Verifying : subversion-libs-1.7.14-14.el7.x86_64 8/52 Verifying : python36-urllib3-1.19.1-5.el7.noarch 9/52 Verifying : nettle-2.7.1-8.el7.x86_64 10/52 Verifying : 
gcc-4.8.5-36.el7_6.2.x86_64 11/52 Verifying : golang-src-1.11.5-1.el7.noarch 12/52 Verifying : python36-pyroute2-0.4.13-2.el7.noarch 13/52 Verifying : pigz-2.3.4-1.el7.x86_64 14/52 Verifying : perl-srpm-macros-1-8.el7.noarch 15/52 Verifying : golang-1.11.5-1.el7.x86_64 16/52 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 17/52 Verifying : glibc-devel-2.17-260.el7_6.5.x86_64 18/52 Verifying : golang-bin-1.11.5-1.el7.x86_64 19/52 Verifying : gdb-7.6.1-114.el7.x86_64 20/52 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/52 Verifying : gnutls-3.3.29-9.el7_6.x86_64 22/52 Verifying : mock-1.4.16-1.el7.noarch 23/52 Verifying : libmodman-2.0.1-8.el7.x86_64 24/52 Verifying : python36-setuptools-39.2.0-3.el7.noarch 25/52 Verifying : mpfr-3.1.1-4.el7.x86_64 26/52 Verifying : python36-six-1.11.0-3.el7.noarch 27/52 Verifying : apr-util-1.5.2-6.el7.x86_64 28/52 Verifying : python36-chardet-2.3.0-6.el7.noarch 29/52 Verifying : kernel-headers-3.10.0-957.21.2.el7.x86_64 30/52 Verifying : patch-2.7.1-10.el7_5.x86_64 31/52 Verifying : distribution-gpg-keys-1.31-1.el7.noarch 32/52 Verifying : pakchois-0.4-10.el7.x86_64 33/52 Verifying : mock-core-configs-30.3-1.el7.noarch 34/52 Verifying : usermode-1.111-5.el7.x86_64 35/52 Verifying : apr-1.4.8-3.el7_4.1.x86_64 36/52 Verifying : libproxy-0.4.11-11.el7.x86_64 37/52 Verifying : neon-0.30.0-3.el7.x86_64 38/52 Verifying : bzip2-1.0.6-13.el7.x86_64 39/52 Verifying : subversion-1.7.14-14.el7.x86_64 40/52 Verifying : python36-distro-1.2.0-3.el7.noarch 41/52 Verifying : glibc-headers-2.17-260.el7_6.5.x86_64 42/52 Verifying : dwz-0.11-3.el7.x86_64 43/52 Verifying : unzip-6.0-19.el7.x86_64 44/52 Verifying : python36-markupsafe-0.23-3.el7.x86_64 45/52 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 46/52 Verifying : python36-requests-2.12.5-3.el7.noarch 47/52 Verifying : python36-jinja2-2.8.1-2.el7.noarch 48/52 Verifying : python36-libs-3.6.8-1.el7.x86_64 49/52 Verifying : elfutils-0.172-2.el7.x86_64 50/52 Verifying : 
python36-rpm-4.11.3-4.el7.x86_64 51/52 Verifying : libmpc-1.0.1-3.el7.x86_64 52/52 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.16-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.31-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5 glibc-headers.x86_64 0:2.17-260.el7_6.5 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.21.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.3-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python36.x86_64 0:3.6.8-1.el7 python36-chardet.noarch 0:2.3.0-6.el7 python36-distro.noarch 0:1.2.0-3.el7 python36-idna.noarch 0:2.7-2.el7 python36-jinja2.noarch 0:2.8.1-2.el7 python36-libs.x86_64 0:3.6.8-1.el7 python36-markupsafe.x86_64 0:0.23-3.el7 python36-pyroute2.noarch 0:0.4.13-2.el7 python36-pysocks.noarch 0:1.6.8-6.el7 python36-requests.noarch 0:2.12.5-3.el7 python36-rpm.x86_64 0:4.11.3-4.el7 python36-setuptools.noarch 0:39.2.0-3.el7 python36-six.noarch 0:1.11.0-3.el7 python36-urllib3.noarch 0:1.19.1-5.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 2512 0 --:--:-- --:--:-- --:--:-- 2520 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 8513k 100 8513k 0 0 15.7M 0 --:--:-- --:--:-- --:--:-- 51.0M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2884 0 --:--:-- --:--:-- --:--:-- 2889 97 38.3M 97 37.3M 0 0 47.4M 0 --:--:-- --:--:-- --:--:-- 47.4M100 38.3M 100 38.3M 0 0 48.1M 0 --:--:-- --:--:-- --:--:-- 105M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 902 0 --:--:-- --:--:-- --:--:-- 900 0 0 0 620 0 0 2361 0 --:--:-- --:--:-- --:--:-- 2361 100 10.7M 100 10.7M 0 0 22.6M 0 --:--:-- --:--:-- --:--:-- 22.6M ~/nightlyrpmSVSaMV/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmSVSaMV/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmSVSaMV/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmSVSaMV ~ INFO: mock.py version 1.4.16 starting (python version = 3.6.8)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmSVSaMV/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.16 INFO: Mock Version: 1.4.16 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmSVSaMV/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 1 minutes 33 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 6f1cd16ead9746f888d04ad8e60f007e -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.8slr77b7:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True
Logical operation result is TRUE
Running script  :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by

SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins1821331334634492798.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done 05e37dba
+---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname  | ip_address  | chassis | used_count | current_state | comment  | distro | rel  | centos_version | architecture | node_pool | console_port | flavor |
+---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 211     | n20.dusty | 172.19.2.84 | dusty   | 3681       | Deployed      | 05e37dba | None   | None | 7              | x86_64       | 1         | 2190         | None   |
+---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0
From ci at centos.org  Sat Jun 15 00:40:50 2019
From: ci at centos.org (ci at centos.org)
Date: Sat, 15 Jun 2019 00:40:50 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #199
In-Reply-To: <406973870.630.1560472850985.JavaMail.jenkins@jenkins.ci.centos.org>
References: <406973870.630.1560472850985.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1405785421.695.1560559250685.JavaMail.jenkins@jenkins.ci.centos.org>

See ------------------------------------------
[...truncated 287.48 KB...]
TASK [container-engine/docker : check number of search domains] **************** Saturday 15 June 2019 01:40:08 +0100 (0:00:00.293) 0:02:57.429 ********* TASK [container-engine/docker : check length of search domains] **************** Saturday 15 June 2019 01:40:08 +0100 (0:00:00.288) 0:02:57.717 ********* TASK [container-engine/docker : check for minimum kernel version] ************** Saturday 15 June 2019 01:40:09 +0100 (0:00:00.296) 0:02:58.013 ********* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Saturday 15 June 2019 01:40:09 +0100 (0:00:00.296) 0:02:58.310 ********* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] *** Saturday 15 June 2019 01:40:10 +0100 (0:00:00.583) 0:02:58.893 ********* TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Saturday 15 June 2019 01:40:11 +0100 (0:00:01.332) 0:03:00.226 ********* TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Saturday 15 June 2019 01:40:11 +0100 (0:00:00.252) 0:03:00.479 ********* TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Saturday 15 June 2019 01:40:11 +0100 (0:00:00.260) 0:03:00.739 ********* TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Saturday 15 June 2019 01:40:12 +0100 (0:00:00.318) 0:03:01.058 ********* TASK [container-engine/docker : Configure docker repository on Fedora] ********* Saturday 15 June 2019 01:40:12 +0100 (0:00:00.303) 0:03:01.362 ********* TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Saturday 15 June 2019 01:40:12 +0100 (0:00:00.284) 0:03:01.646 ********* TASK [container-engine/docker : Copy yum.conf for editing] ********************* Saturday 15 June 2019 01:40:13 +0100 (0:00:00.284) 0:03:01.930 ********* TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Saturday 15 June 2019 
01:40:13 +0100 (0:00:00.273) 0:03:02.204 ********* TASK [container-engine/docker : ensure docker packages are installed] ********** Saturday 15 June 2019 01:40:13 +0100 (0:00:00.284) 0:03:02.488 ********* TASK [container-engine/docker : Ensure docker packages are installed] ********** Saturday 15 June 2019 01:40:13 +0100 (0:00:00.353) 0:03:02.842 ********* TASK [container-engine/docker : get available packages on Ubuntu] ************** Saturday 15 June 2019 01:40:14 +0100 (0:00:00.363) 0:03:03.206 ********* TASK [container-engine/docker : show available packages on ubuntu] ************* Saturday 15 June 2019 01:40:14 +0100 (0:00:00.276) 0:03:03.483 ********* TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Saturday 15 June 2019 01:40:14 +0100 (0:00:00.289) 0:03:03.773 ********* TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Saturday 15 June 2019 01:40:15 +0100 (0:00:00.282) 0:03:04.056 ********* ok: [kube1] ok: [kube2] ok: [kube3] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Saturday 15 June 2019 01:40:17 +0100 (0:00:01.956) 0:03:06.012 ********* ok: [kube1] ok: [kube2] ok: [kube3] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Saturday 15 June 2019 01:40:18 +0100 (0:00:01.084) 0:03:07.097 ********* TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Saturday 15 June 2019 01:40:18 +0100 (0:00:00.283) 0:03:07.381 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Saturday 15 June 2019 01:40:19 +0100 (0:00:01.035) 0:03:08.416 ********* TASK [container-engine/docker : get systemd version] *************************** Saturday 15 June 2019 01:40:19 +0100 (0:00:00.304) 0:03:08.721 ********* TASK [container-engine/docker : Write docker.service systemd file] ************* Saturday 15 June 2019 01:40:20 +0100 (0:00:00.306) 0:03:09.027 ********* TASK [container-engine/docker : Write docker options systemd drop-in] ********** Saturday 15 June 2019 01:40:20 +0100 (0:00:00.298) 0:03:09.326 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Saturday 15 June 2019 01:40:22 +0100 (0:00:02.160) 0:03:11.487 ********* changed: [kube2] changed: [kube1] changed: [kube3] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Saturday 15 June 2019 01:40:24 +0100 (0:00:01.929) 0:03:13.417 ********* TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Saturday 15 June 2019 01:40:24 +0100 (0:00:00.350) 0:03:13.768 ********* RUNNING HANDLER [container-engine/docker : restart docker] ********************* Saturday 15 June 2019 01:40:25 +0100 (0:00:00.263) 0:03:14.032 ********* changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Saturday 15 June 2019 01:40:26 +0100 (0:00:01.028) 0:03:15.060 ********* changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER 
[container-engine/docker : Docker | reload docker.socket] ****** Saturday 15 June 2019 01:40:27 +0100 (0:00:01.169) 0:03:16.230 ********* RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Saturday 15 June 2019 01:40:27 +0100 (0:00:00.284) 0:03:16.514 ********* changed: [kube1] changed: [kube3] changed: [kube2] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Saturday 15 June 2019 01:40:31 +0100 (0:00:04.107) 0:03:20.621 ********* Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube1] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Saturday 15 June 2019 01:40:41 +0100 (0:00:10.234) 0:03:30.856 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : ensure docker service is started and enabled] *** Saturday 15 June 2019 01:40:43 +0100 (0:00:01.232) 0:03:32.089 ********* ok: [kube1] => (item=docker) ok: [kube2] => (item=docker) ok: [kube3] => (item=docker) TASK [download : include_tasks] ************************************************ Saturday 15 June 2019 01:40:44 +0100 (0:00:01.290) 0:03:33.380 ********* included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Saturday 15 June 2019 01:40:44 +0100 (0:00:00.503) 0:03:33.883 ********* ok: [kube1] ok: [kube2] ok: [kube3] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Saturday 15 June 2019 01:40:46 +0100 (0:00:01.233) 0:03:35.117 ********* changed: [kube1] changed: [kube3] changed: [kube2] TASK [download : container_download | create local directory for saved/loaded container images] *** Saturday 15 June 2019 01:40:47 +0100 (0:00:01.058) 0:03:36.175 ********* TASK [download : 
Download items] *********************************************** Saturday 15 June 2019 01:40:47 +0100 (0:00:00.157) 0:03:36.333 *********
fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
[...the identical 'delegate_to' TaskInclude failure repeated for kube1, kube2 and kube3 on each remaining download include; 10 failed tasks per host in total, per the recap below...]
PLAY RECAP *********************************************************************
kube1 : ok=109 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0
kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0
kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0
Saturday 15 June 2019 01:40:50 +0100 (0:00:02.724) 0:03:39.058 *********
===============================================================================
Install packages ------------------------------------------------------- 33.30s
Wait for host to be available ------------------------------------------ 21.83s
gather facts from all instances ---------------------------------------- 16.14s
container-engine/docker : Docker | pause while Docker restarts --------- 10.23s
Persist loaded modules -------------------------------------------------- 6.22s
container-engine/docker : Docker | reload docker ------------------------ 4.11s
kubernetes/preinstall : Create kubernetes directories ------------------- 3.99s
bootstrap-os : Gather nodes hostnames ----------------------------------- 2.76s
download : Download items
----------------------------------------------- 2.72s bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.66s Load required kernel modules -------------------------------------------- 2.60s Extend root VG ---------------------------------------------------------- 2.51s kubernetes/preinstall : Create cni directories -------------------------- 2.44s kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.19s container-engine/docker : Write docker options systemd drop-in ---------- 2.16s Gathering Facts --------------------------------------------------------- 2.13s bootstrap-os : Create remote_tmp for it is used by another module ------- 2.02s kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 1.98s container-engine/docker : ensure service is started if docker packages are already present --- 1.96s container-engine/docker : Write docker dns systemd drop-in -------------- 1.93s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
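[Editor's note on the failure above: "'delegate_to' is not a valid attribute for a TaskInclude" is what Ansible 2.8+ raises when a task-level keyword such as delegate_to is placed directly on a dynamic include_tasks, as this Kubespray checkout does in download_container.yml. A minimal sketch of the failing shape and one accepted rewrite, assuming illustrative task names and delegate target (not Kubespray's actual code); the other common remedy is pinning Ansible to a version the checkout supports.]

```yaml
# Illustrative only -- not the actual download_container.yml.
# Ansible 2.8+ rejects delegate_to on a dynamic include:
- name: container_download | Download containers
  include_tasks: download_container.yml
  delegate_to: localhost          # invalid here -> TaskInclude error

# One accepted rewrite: forward the keyword to the included tasks via `apply`:
- name: container_download | Download containers
  include_tasks:
    file: download_container.yml
    apply:
      delegate_to: localhost      # applied to each task inside the include
```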
Could not match :Build started : False
Logical operation result is FALSE
Skipping script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0
From ci at centos.org Sat Jun 15 01:22:58 2019
From: ci at centos.org (ci at centos.org)
Date: Sat, 15 Jun 2019 01:22:58 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #224
In-Reply-To: <1779039821.632.1560475382961.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1779039821.632.1560475382961.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <996256968.700.1560561778967.JavaMail.jenkins@jenkins.ci.centos.org>
See ------------------------------------------
[...truncated 56.40 KB...]
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Wait for instance(s) creation to complete] *******************************
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
--> Scenario: 'default'
--> Action: 'prepare'
PLAY [Prepare] *****************************************************************
TASK [Gathering Facts] *********************************************************
ok: [instance]
TASK [Install Dependency Packages] *********************************************
changed: [instance]
TASK [Install Dependency Packages] *********************************************
changed: [instance]
PLAY RECAP *********************************************************************
instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: author
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: description
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: company
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: license
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
An error occurred during the test sequence action: 'lint'. Cleaning up.
--> Scenario: 'default'
--> Action: 'destroy'
PLAY [Destroy] *****************************************************************
TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Delete docker network(s)] ************************************************
PLAY RECAP *********************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── create
    └── prepare

--> Scenario: 'default'
--> Action: 'create'
[DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
PLAY [Create] ******************************************************************
TASK [Log into a Docker registry] **********************************************
skipping: [localhost] => (item=None)
TASK [Create Dockerfiles from image names] *************************************
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Discover local Docker images] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]
TASK [Build an Ansible compatible image] ***************************************
ok: [localhost] => (item=None)
ok: [localhost]
TASK [Create docker network(s)] ************************************************
TASK [Determine the CMD directives] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]
TASK [Create molecule instance(s)] *********************************************
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Wait for instance(s) creation to complete] *******************************
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
--> Scenario: 'default'
--> Action: 'prepare'
PLAY [Prepare] *****************************************************************
TASK [Gathering Facts] *********************************************************
ok: [instance]
TASK [Install Dependency Packages] *********************************************
changed: [instance]
TASK [Install Dependency Packages] *********************************************
changed: [instance]
PLAY RECAP *********************************************************************
instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[703] Should change default metadata: author
[703] Should change default metadata: description
[703] Should change default metadata: company
[703] Should change default metadata: license
[...metadata dumps identical to the previous lint run elided...]
An error occurred during the test sequence action: 'lint'. Cleaning up.
--> Scenario: 'default'
--> Action: 'destroy'
PLAY [Destroy] *****************************************************************
TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Delete docker network(s)] ************************************************
PLAY RECAP *********************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
Build step 'Execute shell' marked build as failure
Performing Post build task...
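[Editor's note on the lint failures above: ansible-lint rules [701] and [703] fire because the role's meta/main.yml still contains the unmodified galaxy_info scaffolding ("your description", "your name", etc.) and no platforms list. A minimal sketch of a meta/main.yml that would satisfy those rules; every value below is an illustrative placeholder, not the project's actual metadata.]

```yaml
# Illustrative meta/main.yml for roles/firewall_config -- replace the
# placeholder values with the project's real metadata before committing.
galaxy_info:
  author: Gluster Ansible maintainers            # [703] replace default author
  description: Configure firewalld for GlusterFS # [703] replace default description
  company: Gluster community                     # [703] replace default company
  license: GPLv3                                 # [703] replace default license
  min_ansible_version: 2.5
  platforms:                                     # [701] role info should contain platforms
    - name: EL
      versions:
        - 7
  galaxy_tags: []

dependencies: []
```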
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Sun Jun 16 00:13:51 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 16 Jun 2019 00:13:51 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #396 In-Reply-To: <829005239.694.1560557631383.JavaMail.jenkins@jenkins.ci.centos.org> References: <829005239.694.1560557631383.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <78304159.744.1560644031964.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 38.62 KB...] Transaction test succeeded Running transaction Installing : python36-libs-3.6.8-1.el7.x86_64 1/52 Installing : python36-3.6.8-1.el7.x86_64 2/52 Installing : apr-1.4.8-3.el7_4.1.x86_64 3/52 Installing : mpfr-3.1.1-4.el7.x86_64 4/52 Installing : libmpc-1.0.1-3.el7.x86_64 5/52 Installing : apr-util-1.5.2-6.el7.x86_64 6/52 Installing : python36-six-1.11.0-3.el7.noarch 7/52 Installing : cpp-4.8.5-36.el7_6.2.x86_64 8/52 Installing : python36-idna-2.7-2.el7.noarch 9/52 Installing : python36-pysocks-1.6.8-6.el7.noarch 10/52 Installing : python36-urllib3-1.19.1-5.el7.noarch 11/52 Installing : python36-pyroute2-0.4.13-2.el7.noarch 12/52 Installing : python36-setuptools-39.2.0-3.el7.noarch 13/52 Installing : python36-chardet-2.3.0-6.el7.noarch 14/52 Installing : python36-requests-2.12.5-3.el7.noarch 15/52 Installing : python36-distro-1.2.0-3.el7.noarch 16/52 Installing : python36-markupsafe-0.23-3.el7.x86_64 17/52 Installing : python36-jinja2-2.8.1-2.el7.noarch 18/52 Installing : python36-rpm-4.11.3-4.el7.x86_64 19/52 Installing : elfutils-0.172-2.el7.x86_64 20/52 Installing : unzip-6.0-19.el7.x86_64 21/52 Installing : dwz-0.11-3.el7.x86_64 22/52 
Installing : bzip2-1.0.6-13.el7.x86_64 23/52 Installing : usermode-1.111-5.el7.x86_64 24/52 Installing : pakchois-0.4-10.el7.x86_64 25/52 Installing : distribution-gpg-keys-1.31-1.el7.noarch 26/52 Installing : mock-core-configs-30.3-1.el7.noarch 27/52 Installing : patch-2.7.1-10.el7_5.x86_64 28/52 Installing : kernel-headers-3.10.0-957.21.2.el7.x86_64 29/52 Installing : glibc-headers-2.17-260.el7_6.5.x86_64 30/52 Installing : glibc-devel-2.17-260.el7_6.5.x86_64 31/52 Installing : gcc-4.8.5-36.el7_6.2.x86_64 32/52 Installing : libmodman-2.0.1-8.el7.x86_64 33/52 Installing : libproxy-0.4.11-11.el7.x86_64 34/52 Installing : gdb-7.6.1-114.el7.x86_64 35/52 Installing : perl-Thread-Queue-3.02-2.el7.noarch 36/52 Installing : perl-srpm-macros-1-8.el7.noarch 37/52 Installing : pigz-2.3.4-1.el7.x86_64 38/52 Installing : golang-src-1.11.5-1.el7.noarch 39/52 Installing : nettle-2.7.1-8.el7.x86_64 40/52 Installing : zip-3.0-11.el7.x86_64 41/52 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 42/52 Installing : mercurial-2.6.2-8.el7_4.x86_64 43/52 Installing : trousers-0.3.14-2.el7.x86_64 44/52 Installing : gnutls-3.3.29-9.el7_6.x86_64 45/52 Installing : neon-0.30.0-3.el7.x86_64 46/52 Installing : subversion-libs-1.7.14-14.el7.x86_64 47/52 Installing : subversion-1.7.14-14.el7.x86_64 48/52 Installing : golang-1.11.5-1.el7.x86_64 49/52 Installing : golang-bin-1.11.5-1.el7.x86_64 50/52 Installing : rpm-build-4.11.3-35.el7.x86_64 51/52 Installing : mock-1.4.16-1.el7.noarch 52/52 Verifying : trousers-0.3.14-2.el7.x86_64 1/52 Verifying : python36-idna-2.7-2.el7.noarch 2/52 Verifying : rpm-build-4.11.3-35.el7.x86_64 3/52 Verifying : python36-pysocks-1.6.8-6.el7.noarch 4/52 Verifying : mercurial-2.6.2-8.el7_4.x86_64 5/52 Verifying : zip-3.0-11.el7.x86_64 6/52 Verifying : python36-3.6.8-1.el7.x86_64 7/52 Verifying : subversion-libs-1.7.14-14.el7.x86_64 8/52 Verifying : python36-urllib3-1.19.1-5.el7.noarch 9/52 Verifying : nettle-2.7.1-8.el7.x86_64 10/52 Verifying : 
gcc-4.8.5-36.el7_6.2.x86_64 11/52 Verifying : golang-src-1.11.5-1.el7.noarch 12/52 Verifying : python36-pyroute2-0.4.13-2.el7.noarch 13/52 Verifying : pigz-2.3.4-1.el7.x86_64 14/52 Verifying : perl-srpm-macros-1-8.el7.noarch 15/52 Verifying : golang-1.11.5-1.el7.x86_64 16/52 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 17/52 Verifying : glibc-devel-2.17-260.el7_6.5.x86_64 18/52 Verifying : golang-bin-1.11.5-1.el7.x86_64 19/52 Verifying : gdb-7.6.1-114.el7.x86_64 20/52 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/52 Verifying : gnutls-3.3.29-9.el7_6.x86_64 22/52 Verifying : mock-1.4.16-1.el7.noarch 23/52 Verifying : libmodman-2.0.1-8.el7.x86_64 24/52 Verifying : python36-setuptools-39.2.0-3.el7.noarch 25/52 Verifying : mpfr-3.1.1-4.el7.x86_64 26/52 Verifying : python36-six-1.11.0-3.el7.noarch 27/52 Verifying : apr-util-1.5.2-6.el7.x86_64 28/52 Verifying : python36-chardet-2.3.0-6.el7.noarch 29/52 Verifying : kernel-headers-3.10.0-957.21.2.el7.x86_64 30/52 Verifying : patch-2.7.1-10.el7_5.x86_64 31/52 Verifying : distribution-gpg-keys-1.31-1.el7.noarch 32/52 Verifying : pakchois-0.4-10.el7.x86_64 33/52 Verifying : mock-core-configs-30.3-1.el7.noarch 34/52 Verifying : usermode-1.111-5.el7.x86_64 35/52 Verifying : apr-1.4.8-3.el7_4.1.x86_64 36/52 Verifying : libproxy-0.4.11-11.el7.x86_64 37/52 Verifying : neon-0.30.0-3.el7.x86_64 38/52 Verifying : bzip2-1.0.6-13.el7.x86_64 39/52 Verifying : subversion-1.7.14-14.el7.x86_64 40/52 Verifying : python36-distro-1.2.0-3.el7.noarch 41/52 Verifying : glibc-headers-2.17-260.el7_6.5.x86_64 42/52 Verifying : dwz-0.11-3.el7.x86_64 43/52 Verifying : unzip-6.0-19.el7.x86_64 44/52 Verifying : python36-markupsafe-0.23-3.el7.x86_64 45/52 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 46/52 Verifying : python36-requests-2.12.5-3.el7.noarch 47/52 Verifying : python36-jinja2-2.8.1-2.el7.noarch 48/52 Verifying : python36-libs-3.6.8-1.el7.x86_64 49/52 Verifying : elfutils-0.172-2.el7.x86_64 50/52 Verifying : 
python36-rpm-4.11.3-4.el7.x86_64 51/52 Verifying : libmpc-1.0.1-3.el7.x86_64 52/52 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.16-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.31-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5 glibc-headers.x86_64 0:2.17-260.el7_6.5 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.21.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.3-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python36.x86_64 0:3.6.8-1.el7 python36-chardet.noarch 0:2.3.0-6.el7 python36-distro.noarch 0:1.2.0-3.el7 python36-idna.noarch 0:2.7-2.el7 python36-jinja2.noarch 0:2.8.1-2.el7 python36-libs.x86_64 0:3.6.8-1.el7 python36-markupsafe.x86_64 0:0.23-3.el7 python36-pyroute2.noarch 0:0.4.13-2.el7 python36-pysocks.noarch 0:1.6.8-6.el7 python36-requests.noarch 0:2.12.5-3.el7 python36-rpm.x86_64 0:4.11.3-4.el7 python36-setuptools.noarch 0:39.2.0-3.el7 python36-six.noarch 0:1.11.0-3.el7 python36-urllib3.noarch 0:1.19.1-5.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 2677 0 --:--:-- --:--:-- --:--:-- 2688 100 8513k 100 8513k 0 0 17.8M 0 --:--:-- --:--:-- --:--:-- 17.8M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 3370 0 --:--:-- --:--:-- --:--:-- 3389 0 38.3M 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 38.3M 100 38.3M 0 0 51.7M 0 --:--:-- --:--:-- --:--:-- 90.1M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 1040 0 --:--:-- --:--:-- --:--:-- 1047 0 0 0 620 0 0 2518 0 --:--:-- --:--:-- --:--:-- 2518 100 10.7M 100 10.7M 0 0 19.7M 0 --:--:-- --:--:-- --:--:-- 19.7M ~/nightlyrpmoBjVKh/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmoBjVKh/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmoBjVKh/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmoBjVKh ~ INFO: mock.py version 1.4.16 starting (python version = 3.6.8)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmoBjVKh/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.16 INFO: Mock Version: 1.4.16 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmoBjVKh/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 1 minutes 32 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M b3dd9b5fe10342a48bd99bb8467282ff -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.3dpojfwp:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
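The mock failure above (rpmbuild exiting non-zero inside the epel-7-x86_64 chroot) can usually be reproduced outside Jenkins by rebuilding the same SRPM. A minimal sketch, assuming mock is installed and using the SRPM name and config from the log; the DRY_RUN guard is illustrative and only prints the command by default:

```shell
# Sketch: re-run the failing nightly build locally. SRPM name and
# chroot config are taken from the log above; DRY_RUN=1 (the default
# here) just prints the command instead of executing it.
SRPM=glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
CFG=epel-7-x86_64
CMD="mock -r $CFG --rebuild $SRPM"
if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "$CMD"
else
    $CMD    # build logs land under /var/lib/mock/$CFG/result by default
fi
```

Inspecting build.log and root.log in the result directory usually narrows the failure faster than re-running the full Jenkins job.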
Match found for :Building remotely : True
Logical operation result is TRUE
Running script : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins1222483381280257706.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done d6a33562
+---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname  | ip_address  | chassis | used_count | current_state | comment  | distro | rel  | centos_version | architecture | node_pool | console_port | flavor |
+---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 211     | n20.dusty | 172.19.2.84 | dusty   | 3684       | Deployed      | d6a33562 | None   | None | 7              | x86_64       | 1         | 2190         | None   |
+---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0
From ci at centos.org Sun Jun 16 00:40:48 2019
From: ci at centos.org (ci at centos.org)
Date: Sun, 16 Jun 2019 00:40:48 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #200
In-Reply-To: <1405785421.695.1560559250685.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1405785421.695.1560559250685.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <805880466.745.1560645648932.JavaMail.jenkins@jenkins.ci.centos.org>
See ------------------------------------------
[...truncated 287.55 KB...]
TASK [container-engine/docker : check number of search domains] **************** Sunday 16 June 2019 01:40:06 +0100 (0:00:00.296) 0:03:01.613 *********** TASK [container-engine/docker : check length of search domains] **************** Sunday 16 June 2019 01:40:07 +0100 (0:00:00.294) 0:03:01.908 *********** TASK [container-engine/docker : check for minimum kernel version] ************** Sunday 16 June 2019 01:40:07 +0100 (0:00:00.295) 0:03:02.203 *********** TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Sunday 16 June 2019 01:40:07 +0100 (0:00:00.283) 0:03:02.487 *********** TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] *** Sunday 16 June 2019 01:40:08 +0100 (0:00:00.591) 0:03:03.078 *********** TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Sunday 16 June 2019 01:40:09 +0100 (0:00:01.287) 0:03:04.366 *********** TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Sunday 16 June 2019 01:40:09 +0100 (0:00:00.260) 0:03:04.627 *********** TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Sunday 16 June 2019 01:40:10 +0100 (0:00:00.259) 0:03:04.886 *********** TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Sunday 16 June 2019 01:40:10 +0100 (0:00:00.295) 0:03:05.182 *********** TASK [container-engine/docker : Configure docker repository on Fedora] ********* Sunday 16 June 2019 01:40:10 +0100 (0:00:00.309) 0:03:05.491 *********** TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Sunday 16 June 2019 01:40:11 +0100 (0:00:00.270) 0:03:05.761 *********** TASK [container-engine/docker : Copy yum.conf for editing] ********************* Sunday 16 June 2019 01:40:11 +0100 (0:00:00.284) 0:03:06.045 *********** TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Sunday 16 June 2019 
01:40:11 +0100 (0:00:00.279) 0:03:06.324 *********** TASK [container-engine/docker : ensure docker packages are installed] ********** Sunday 16 June 2019 01:40:11 +0100 (0:00:00.279) 0:03:06.603 *********** TASK [container-engine/docker : Ensure docker packages are installed] ********** Sunday 16 June 2019 01:40:12 +0100 (0:00:00.369) 0:03:06.973 *********** TASK [container-engine/docker : get available packages on Ubuntu] ************** Sunday 16 June 2019 01:40:12 +0100 (0:00:00.353) 0:03:07.326 *********** TASK [container-engine/docker : show available packages on ubuntu] ************* Sunday 16 June 2019 01:40:12 +0100 (0:00:00.289) 0:03:07.616 *********** TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Sunday 16 June 2019 01:40:13 +0100 (0:00:00.284) 0:03:07.900 *********** TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Sunday 16 June 2019 01:40:13 +0100 (0:00:00.285) 0:03:08.186 *********** ok: [kube1] ok: [kube3] ok: [kube2] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Sunday 16 June 2019 01:40:15 +0100 (0:00:01.948) 0:03:10.135 *********** ok: [kube1] ok: [kube2] ok: [kube3] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Sunday 16 June 2019 01:40:16 +0100 (0:00:01.111) 0:03:11.246 *********** TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Sunday 16 June 2019 01:40:16 +0100 (0:00:00.284) 0:03:11.531 *********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Sunday 16 June 2019 01:40:17 +0100 (0:00:00.914) 0:03:12.446 *********** TASK [container-engine/docker : get systemd version] *************************** Sunday 16 June 2019 01:40:18 +0100 (0:00:00.354) 0:03:12.801 *********** TASK [container-engine/docker : Write docker.service systemd file] ************* Sunday 16 June 2019 01:40:18 +0100 (0:00:00.308) 0:03:13.109 *********** TASK [container-engine/docker : Write docker options systemd drop-in] ********** Sunday 16 June 2019 01:40:18 +0100 (0:00:00.300) 0:03:13.410 *********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Sunday 16 June 2019 01:40:20 +0100 (0:00:02.089) 0:03:15.500 *********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Sunday 16 June 2019 01:40:22 +0100 (0:00:02.088) 0:03:17.588 *********** TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Sunday 16 June 2019 01:40:23 +0100 (0:00:00.361) 0:03:17.950 *********** RUNNING HANDLER [container-engine/docker : restart docker] ********************* Sunday 16 June 2019 01:40:23 +0100 (0:00:00.231) 0:03:18.181 *********** changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Sunday 16 June 2019 01:40:24 +0100 (0:00:01.010) 0:03:19.192 *********** changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER 
[container-engine/docker : Docker | reload docker.socket] ****** Sunday 16 June 2019 01:40:25 +0100 (0:00:01.116) 0:03:20.309 *********** RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Sunday 16 June 2019 01:40:25 +0100 (0:00:00.268) 0:03:20.578 *********** changed: [kube1] changed: [kube3] changed: [kube2] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Sunday 16 June 2019 01:40:29 +0100 (0:00:04.140) 0:03:24.719 *********** Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube1] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Sunday 16 June 2019 01:40:40 +0100 (0:00:10.253) 0:03:34.973 *********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : ensure docker service is started and enabled] *** Sunday 16 June 2019 01:40:41 +0100 (0:00:01.339) 0:03:36.313 *********** ok: [kube1] => (item=docker) ok: [kube2] => (item=docker) ok: [kube3] => (item=docker) TASK [download : include_tasks] ************************************************ Sunday 16 June 2019 01:40:42 +0100 (0:00:01.314) 0:03:37.627 *********** included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Sunday 16 June 2019 01:40:43 +0100 (0:00:00.523) 0:03:38.150 *********** ok: [kube1] ok: [kube2] ok: [kube3] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Sunday 16 June 2019 01:40:44 +0100 (0:00:01.252) 0:03:39.403 *********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [download : container_download | create local directory for saved/loaded container images] *** Sunday 16 June 2019 01:40:45 +0100 (0:00:00.997) 0:03:40.400 *********** TASK [download : 
Download items] *********************************************** Sunday 16 June 2019 01:40:45 +0100 (0:00:00.104) 0:03:40.505 *********** fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube2, kube1, kube3 included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube2, kube1, kube3 fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube2, kube1, kube3 fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} PLAY RECAP ********************************************************************* kube1 : ok=109 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0 kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0 kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0 Sunday 16 June 2019 01:40:48 +0100 (0:00:02.692) 0:03:43.198 *********** =============================================================================== Install packages ------------------------------------------------------- 34.23s Wait for host to be available ------------------------------------------ 23.90s gather facts from all instances ---------------------------------------- 17.24s container-engine/docker : Docker | pause while Docker restarts --------- 10.25s Persist loaded modules -------------------------------------------------- 5.99s container-engine/docker : Docker | reload docker ------------------------ 4.14s kubernetes/preinstall : Create kubernetes directories ------------------- 3.99s bootstrap-os : Gather nodes hostnames ----------------------------------- 2.71s download : Download items 
----------------------------------------------- 2.69s kubernetes/preinstall : Create cni directories -------------------------- 2.62s Load required kernel modules -------------------------------------------- 2.56s bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.42s Extend root VG ---------------------------------------------------------- 2.38s kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.17s container-engine/docker : Write docker options systemd drop-in ---------- 2.09s container-engine/docker : Write docker dns systemd drop-in -------------- 2.09s bootstrap-os : Disable fastestmirror plugin ----------------------------- 2.06s kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.02s download : Sync container ----------------------------------------------- 2.02s download : Download items ----------------------------------------------- 2.00s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
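The repeated "'delegate_to' is not a valid attribute for a TaskInclude" failures above come from a keyword that Ansible 2.8+ rejects on dynamic includes. A hedged sketch of the shape that triggers the error and one common remedy; the task name is taken from the error message, while the file layout, the `download_delegate` variable, and the stand-in `command` body are illustrative:

```yaml
# Fails on Ansible >= 2.8: delegate_to is not accepted on include_tasks.
- include_tasks: download_container.yml
  delegate_to: "{{ download_delegate }}"

# One remedy: keep the include bare ...
- include_tasks: download_container.yml

# ... and delegate the individual task inside download_container.yml:
- name: container_download | Make download decision if pull is required by tag or sha256
  command: /bin/true        # placeholder body for illustration
  delegate_to: "{{ download_delegate }}"
```

The error repeats once per host and per include attempt, which is why the same message appears dozens of times in the log.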
Could not match :Build started : False
Logical operation result is FALSE
Skipping script : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

END OF POST BUILD TASK : 0

From ci at centos.org Sun Jun 16 01:23:06 2019
From: ci at centos.org (ci at centos.org)
Date: Sun, 16 Jun 2019 01:23:06 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #225
In-Reply-To: <996256968.700.1560561778967.JavaMail.jenkins@jenkins.ci.centos.org>
References: <996256968.700.1560561778967.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <365286279.747.1560648186203.JavaMail.jenkins@jenkins.ci.centos.org>

See
------------------------------------------
[...truncated 56.45 KB...]
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Wait for instance(s) creation to complete] *******************************
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
--> Scenario: 'default'
--> Action: 'prepare'
PLAY [Prepare] *****************************************************************
TASK [Gathering Facts] *********************************************************
ok: [instance]
TASK [Install Dependency Packages] *********************************************
changed: [instance]
TASK [Install Dependency Packages] *********************************************
changed: [instance]
PLAY RECAP *********************************************************************
instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix
└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy
--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: author
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: description
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [],
u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Delete docker network(s)] ************************************************
PLAY RECAP *********************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix
└── default
    ├── create
    └── prepare
--> Scenario: 'default'
--> Action: 'create'
[DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
PLAY [Create] ******************************************************************
TASK [Log into a Docker registry] **********************************************
skipping: [localhost] => (item=None)
TASK [Create Dockerfiles from image names] *************************************
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Discover local Docker images] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]
TASK [Build an Ansible compatible image] ***************************************
ok: [localhost] => (item=None)
ok: [localhost]
TASK [Create docker network(s)] ************************************************
TASK [Determine the CMD directives] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]
TASK [Create molecule instance(s)] *********************************************
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Wait for instance(s) creation to complete]
*******************************
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
--> Scenario: 'default'
--> Action: 'prepare'
PLAY [Prepare] *****************************************************************
TASK [Gathering Facts] *********************************************************
ok: [instance]
TASK [Install Dependency Packages] *********************************************
changed: [instance]
TASK [Install Dependency Packages] *********************************************
changed: [instance]
PLAY RECAP *********************************************************************
instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix
└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy
--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', 
u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
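[Editor's note: the ansible-lint failures above, rules [701] and [703], are complaints that the role's galaxy_info still carries the placeholder values generated by galaxy init. A sketch of a meta/main.yml that would satisfy both rules follows; the author, description, company, and platform values here are invented placeholders for illustration, not taken from the gluster-ansible-infra repository.]

```yaml
galaxy_info:
  author: Gluster Community                        # [703] replace default author
  description: Configure firewalld for a GlusterFS cluster  # [703] replace default description
  company: Gluster                                 # [703] replace default company
  license: GPLv3                                   # [703] replace default license
  min_ansible_version: 2.5
  platforms:                                       # [701] role info should contain platforms
    - name: EL
      versions:
        - 7
  galaxy_tags: []

dependencies: []
```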
Could not match :Build started : False
Logical operation result is FALSE
Skipping script : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

END OF POST BUILD TASK : 0

From ci at centos.org Mon Jun 17 00:16:02 2019
From: ci at centos.org (ci at centos.org)
Date: Mon, 17 Jun 2019 00:16:02 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #397
In-Reply-To: <78304159.744.1560644031964.JavaMail.jenkins@jenkins.ci.centos.org>
References: <78304159.744.1560644031964.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <945529595.799.1560730562079.JavaMail.jenkins@jenkins.ci.centos.org>

See
------------------------------------------
[...truncated 38.61 KB...]
Transaction test succeeded
Running transaction
Installing : python36-libs-3.6.8-1.el7.x86_64 1/52
Installing : python36-3.6.8-1.el7.x86_64 2/52
Installing : apr-1.4.8-3.el7_4.1.x86_64 3/52
Installing : mpfr-3.1.1-4.el7.x86_64 4/52
Installing : libmpc-1.0.1-3.el7.x86_64 5/52
Installing : apr-util-1.5.2-6.el7.x86_64 6/52
Installing : python36-six-1.11.0-3.el7.noarch 7/52
Installing : cpp-4.8.5-36.el7_6.2.x86_64 8/52
Installing : python36-idna-2.7-2.el7.noarch 9/52
Installing : python36-pysocks-1.6.8-6.el7.noarch 10/52
Installing : python36-urllib3-1.19.1-5.el7.noarch 11/52
Installing : python36-pyroute2-0.4.13-2.el7.noarch 12/52
Installing : python36-setuptools-39.2.0-3.el7.noarch 13/52
Installing : python36-chardet-2.3.0-6.el7.noarch 14/52
Installing : python36-requests-2.12.5-3.el7.noarch 15/52
Installing : python36-distro-1.2.0-3.el7.noarch 16/52
Installing : python36-markupsafe-0.23-3.el7.x86_64 17/52
Installing : python36-jinja2-2.8.1-2.el7.noarch 18/52
Installing : python36-rpm-4.11.3-4.el7.x86_64 19/52
Installing : elfutils-0.172-2.el7.x86_64 20/52
Installing : unzip-6.0-19.el7.x86_64 21/52
Installing : dwz-0.11-3.el7.x86_64 22/52
Installing : bzip2-1.0.6-13.el7.x86_64 23/52 Installing : usermode-1.111-5.el7.x86_64 24/52 Installing : pakchois-0.4-10.el7.x86_64 25/52 Installing : distribution-gpg-keys-1.31-1.el7.noarch 26/52 Installing : mock-core-configs-30.3-1.el7.noarch 27/52 Installing : patch-2.7.1-10.el7_5.x86_64 28/52 Installing : kernel-headers-3.10.0-957.21.2.el7.x86_64 29/52 Installing : glibc-headers-2.17-260.el7_6.5.x86_64 30/52 Installing : glibc-devel-2.17-260.el7_6.5.x86_64 31/52 Installing : gcc-4.8.5-36.el7_6.2.x86_64 32/52 Installing : libmodman-2.0.1-8.el7.x86_64 33/52 Installing : libproxy-0.4.11-11.el7.x86_64 34/52 Installing : gdb-7.6.1-114.el7.x86_64 35/52 Installing : perl-Thread-Queue-3.02-2.el7.noarch 36/52 Installing : perl-srpm-macros-1-8.el7.noarch 37/52 Installing : pigz-2.3.4-1.el7.x86_64 38/52 Installing : golang-src-1.11.5-1.el7.noarch 39/52 Installing : nettle-2.7.1-8.el7.x86_64 40/52 Installing : zip-3.0-11.el7.x86_64 41/52 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 42/52 Installing : mercurial-2.6.2-8.el7_4.x86_64 43/52 Installing : trousers-0.3.14-2.el7.x86_64 44/52 Installing : gnutls-3.3.29-9.el7_6.x86_64 45/52 Installing : neon-0.30.0-3.el7.x86_64 46/52 Installing : subversion-libs-1.7.14-14.el7.x86_64 47/52 Installing : subversion-1.7.14-14.el7.x86_64 48/52 Installing : golang-1.11.5-1.el7.x86_64 49/52 Installing : golang-bin-1.11.5-1.el7.x86_64 50/52 Installing : rpm-build-4.11.3-35.el7.x86_64 51/52 Installing : mock-1.4.16-1.el7.noarch 52/52 Verifying : trousers-0.3.14-2.el7.x86_64 1/52 Verifying : python36-idna-2.7-2.el7.noarch 2/52 Verifying : rpm-build-4.11.3-35.el7.x86_64 3/52 Verifying : python36-pysocks-1.6.8-6.el7.noarch 4/52 Verifying : mercurial-2.6.2-8.el7_4.x86_64 5/52 Verifying : zip-3.0-11.el7.x86_64 6/52 Verifying : python36-3.6.8-1.el7.x86_64 7/52 Verifying : subversion-libs-1.7.14-14.el7.x86_64 8/52 Verifying : python36-urllib3-1.19.1-5.el7.noarch 9/52 Verifying : nettle-2.7.1-8.el7.x86_64 10/52 Verifying : 
gcc-4.8.5-36.el7_6.2.x86_64 11/52 Verifying : golang-src-1.11.5-1.el7.noarch 12/52 Verifying : python36-pyroute2-0.4.13-2.el7.noarch 13/52 Verifying : pigz-2.3.4-1.el7.x86_64 14/52 Verifying : perl-srpm-macros-1-8.el7.noarch 15/52 Verifying : golang-1.11.5-1.el7.x86_64 16/52 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 17/52 Verifying : glibc-devel-2.17-260.el7_6.5.x86_64 18/52 Verifying : golang-bin-1.11.5-1.el7.x86_64 19/52 Verifying : gdb-7.6.1-114.el7.x86_64 20/52 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/52 Verifying : gnutls-3.3.29-9.el7_6.x86_64 22/52 Verifying : mock-1.4.16-1.el7.noarch 23/52 Verifying : libmodman-2.0.1-8.el7.x86_64 24/52 Verifying : python36-setuptools-39.2.0-3.el7.noarch 25/52 Verifying : mpfr-3.1.1-4.el7.x86_64 26/52 Verifying : python36-six-1.11.0-3.el7.noarch 27/52 Verifying : apr-util-1.5.2-6.el7.x86_64 28/52 Verifying : python36-chardet-2.3.0-6.el7.noarch 29/52 Verifying : kernel-headers-3.10.0-957.21.2.el7.x86_64 30/52 Verifying : patch-2.7.1-10.el7_5.x86_64 31/52 Verifying : distribution-gpg-keys-1.31-1.el7.noarch 32/52 Verifying : pakchois-0.4-10.el7.x86_64 33/52 Verifying : mock-core-configs-30.3-1.el7.noarch 34/52 Verifying : usermode-1.111-5.el7.x86_64 35/52 Verifying : apr-1.4.8-3.el7_4.1.x86_64 36/52 Verifying : libproxy-0.4.11-11.el7.x86_64 37/52 Verifying : neon-0.30.0-3.el7.x86_64 38/52 Verifying : bzip2-1.0.6-13.el7.x86_64 39/52 Verifying : subversion-1.7.14-14.el7.x86_64 40/52 Verifying : python36-distro-1.2.0-3.el7.noarch 41/52 Verifying : glibc-headers-2.17-260.el7_6.5.x86_64 42/52 Verifying : dwz-0.11-3.el7.x86_64 43/52 Verifying : unzip-6.0-19.el7.x86_64 44/52 Verifying : python36-markupsafe-0.23-3.el7.x86_64 45/52 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 46/52 Verifying : python36-requests-2.12.5-3.el7.noarch 47/52 Verifying : python36-jinja2-2.8.1-2.el7.noarch 48/52 Verifying : python36-libs-3.6.8-1.el7.x86_64 49/52 Verifying : elfutils-0.172-2.el7.x86_64 50/52 Verifying : 
python36-rpm-4.11.3-4.el7.x86_64 51/52 Verifying : libmpc-1.0.1-3.el7.x86_64 52/52 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.16-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.31-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5 glibc-headers.x86_64 0:2.17-260.el7_6.5 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.21.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.3-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python36.x86_64 0:3.6.8-1.el7 python36-chardet.noarch 0:2.3.0-6.el7 python36-distro.noarch 0:1.2.0-3.el7 python36-idna.noarch 0:2.7-2.el7 python36-jinja2.noarch 0:2.8.1-2.el7 python36-libs.x86_64 0:3.6.8-1.el7 python36-markupsafe.x86_64 0:0.23-3.el7 python36-pyroute2.noarch 0:0.4.13-2.el7 python36-pysocks.noarch 0:1.6.8-6.el7 python36-requests.noarch 0:2.12.5-3.el7 python36-rpm.x86_64 0:4.11.3-4.el7 python36-setuptools.noarch 0:39.2.0-3.el7 python36-six.noarch 0:1.11.0-3.el7 python36-urllib3.noarch 0:1.19.1-5.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1645 0 --:--:-- --:--:-- --:--:-- 1653 100 8513k 100 8513k 0 0 10.5M 0 --:--:-- --:--:-- --:--:-- 10.5M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 1825 0 --:--:-- --:--:-- --:--:-- 1822 64 38.3M 64 24.8M 0 0 23.4M 0 0:00:01 0:00:01 --:--:-- 23.4M100 38.3M 100 38.3M 0 0 25.7M 0 0:00:01 0:00:01 --:--:-- 31.6M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 692 0 --:--:-- --:--:-- --:--:-- 692 0 0 0 620 0 0 1963 0 --:--:-- --:--:-- --:--:-- 1963 100 10.7M 100 10.7M 0 0 18.0M 0 --:--:-- --:--:-- --:--:-- 18.0M ~/nightlyrpmgvZYqH/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmgvZYqH/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmgvZYqH/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmgvZYqH ~ INFO: mock.py version 1.4.16 starting (python version = 3.6.8)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmgvZYqH/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.16 INFO: Mock Version: 1.4.16 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmgvZYqH/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 27 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 0629a04e59a74e4d84d9a2d2fbf37312 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.pk0728fk:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True
Logical operation result is TRUE
Running script : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins2186426512411694273.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done 257e3df6
+---------+-----------+------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor |
+---------+-----------+------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 136 | n9.crusty | 172.19.2.9 | crusty | 3688 | Deployed | 257e3df6 | None | None | 7 | x86_64 | 1 | 2080 | None |
+---------+-----------+------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0

From ci at centos.org Mon Jun 17 00:40:44 2019
From: ci at centos.org (ci at centos.org)
Date: Mon, 17 Jun 2019 00:40:44 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #201
In-Reply-To: <805880466.745.1560645648932.JavaMail.jenkins@jenkins.ci.centos.org>
References: <805880466.745.1560645648932.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <928466262.801.1560732044457.JavaMail.jenkins@jenkins.ci.centos.org>

See
------------------------------------------
[...truncated 287.47 KB...]
TASK [container-engine/docker : check number of search domains] **************** Monday 17 June 2019 01:40:02 +0100 (0:00:00.291) 0:02:56.117 *********** TASK [container-engine/docker : check length of search domains] **************** Monday 17 June 2019 01:40:02 +0100 (0:00:00.288) 0:02:56.406 *********** TASK [container-engine/docker : check for minimum kernel version] ************** Monday 17 June 2019 01:40:02 +0100 (0:00:00.291) 0:02:56.697 *********** TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Monday 17 June 2019 01:40:03 +0100 (0:00:00.286) 0:02:56.984 *********** TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] *** Monday 17 June 2019 01:40:03 +0100 (0:00:00.642) 0:02:57.627 *********** TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Monday 17 June 2019 01:40:05 +0100 (0:00:01.391) 0:02:59.019 *********** TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Monday 17 June 2019 01:40:05 +0100 (0:00:00.257) 0:02:59.277 *********** TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Monday 17 June 2019 01:40:05 +0100 (0:00:00.253) 0:02:59.530 *********** TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Monday 17 June 2019 01:40:06 +0100 (0:00:00.305) 0:02:59.836 *********** TASK [container-engine/docker : Configure docker repository on Fedora] ********* Monday 17 June 2019 01:40:06 +0100 (0:00:00.302) 0:03:00.138 *********** TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Monday 17 June 2019 01:40:06 +0100 (0:00:00.269) 0:03:00.408 *********** TASK [container-engine/docker : Copy yum.conf for editing] ********************* Monday 17 June 2019 01:40:06 +0100 (0:00:00.281) 0:03:00.689 *********** TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Monday 17 June 2019 
01:40:07 +0100 (0:00:00.281) 0:03:00.970 *********** TASK [container-engine/docker : ensure docker packages are installed] ********** Monday 17 June 2019 01:40:07 +0100 (0:00:00.274) 0:03:01.245 *********** TASK [container-engine/docker : Ensure docker packages are installed] ********** Monday 17 June 2019 01:40:07 +0100 (0:00:00.355) 0:03:01.601 *********** TASK [container-engine/docker : get available packages on Ubuntu] ************** Monday 17 June 2019 01:40:08 +0100 (0:00:00.349) 0:03:01.951 *********** TASK [container-engine/docker : show available packages on ubuntu] ************* Monday 17 June 2019 01:40:08 +0100 (0:00:00.283) 0:03:02.234 *********** TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Monday 17 June 2019 01:40:08 +0100 (0:00:00.274) 0:03:02.508 *********** TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Monday 17 June 2019 01:40:09 +0100 (0:00:00.282) 0:03:02.791 *********** ok: [kube1] ok: [kube2] ok: [kube3] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Monday 17 June 2019 01:40:11 +0100 (0:00:02.087) 0:03:04.879 *********** ok: [kube1] ok: [kube2] ok: [kube3] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] ***
Monday 17 June 2019 01:40:12 +0100 (0:00:01.155) 0:03:06.034 ***********
TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] ***
Monday 17 June 2019 01:40:12 +0100 (0:00:00.276) 0:03:06.311 ***********
changed: [kube1]
changed: [kube3]
changed: [kube2]
TASK [container-engine/docker : Write docker proxy drop-in] ********************
Monday 17 June 2019 01:40:13 +0100 (0:00:01.025) 0:03:07.336 ***********
TASK [container-engine/docker : get systemd version] ***************************
Monday 17 June 2019 01:40:13 +0100 (0:00:00.341) 0:03:07.678 ***********
TASK [container-engine/docker : Write docker.service systemd file] *************
Monday 17 June 2019 01:40:14 +0100 (0:00:00.296) 0:03:07.974 ***********
TASK [container-engine/docker : Write docker options systemd drop-in] **********
Monday 17 June 2019 01:40:14 +0100 (0:00:00.301) 0:03:08.276 ***********
changed: [kube1]
changed: [kube3]
changed: [kube2]
TASK [container-engine/docker : Write docker dns systemd drop-in] **************
Monday 17 June 2019 01:40:16 +0100 (0:00:02.169) 0:03:10.446 ***********
changed: [kube2]
changed: [kube1]
changed: [kube3]
TASK [container-engine/docker : Copy docker orphan clean up script to the node] ***
Monday 17 June 2019 01:40:18 +0100 (0:00:02.034) 0:03:12.481 ***********
TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] ***
Monday 17 June 2019 01:40:19 +0100 (0:00:00.364) 0:03:12.845 ***********
RUNNING HANDLER [container-engine/docker : restart docker] *********************
Monday 17 June 2019 01:40:19 +0100 (0:00:00.234) 0:03:13.079 ***********
changed: [kube1]
changed: [kube2]
changed: [kube3]
RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************
Monday 17 June 2019 01:40:20 +0100 (0:00:00.899) 0:03:13.979 ***********
changed: [kube1]
changed: [kube2]
changed: [kube3]
RUNNING HANDLER [container-engine/docker : Docker | reload docker.socket] ******
Monday 17 June 2019 01:40:21 +0100 (0:00:01.125) 0:03:15.104 ***********
RUNNING HANDLER [container-engine/docker : Docker | reload docker] *************
Monday 17 June 2019 01:40:21 +0100 (0:00:00.296) 0:03:15.401 ***********
changed: [kube1]
changed: [kube2]
changed: [kube3]
RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] ***
Monday 17 June 2019 01:40:25 +0100 (0:00:04.293) 0:03:19.695 ***********
Pausing for 10 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
[container-engine/docker : Docker | pause while Docker restarts]
Waiting for docker restart:
ok: [kube1]
RUNNING HANDLER [container-engine/docker : Docker | wait for docker] ***********
Monday 17 June 2019 01:40:36 +0100 (0:00:10.210) 0:03:29.905 ***********
changed: [kube2]
changed: [kube1]
changed: [kube3]
TASK [container-engine/docker : ensure docker service is started and enabled] ***
Monday 17 June 2019 01:40:37 +0100 (0:00:01.204) 0:03:31.110 ***********
ok: [kube1] => (item=docker)
ok: [kube2] => (item=docker)
ok: [kube3] => (item=docker)
TASK [download : include_tasks] ************************************************
Monday 17 June 2019 01:40:38 +0100 (0:00:01.231) 0:03:32.342 ***********
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3
TASK [download : Register docker images info] **********************************
Monday 17 June 2019 01:40:39 +0100 (0:00:00.496) 0:03:32.838 ***********
ok: [kube3]
ok: [kube1]
ok: [kube2]
TASK [download : container_download | Create dest directory for saved/loaded container images] ***
Monday 17 June 2019 01:40:40 +0100 (0:00:01.219) 0:03:34.058 ***********
changed: [kube1]
changed: [kube2]
changed: [kube3]
TASK [download : container_download | create local directory for saved/loaded container images] ***
Monday 17 June 2019 01:40:41 +0100 (0:00:00.958) 0:03:35.017 ***********
TASK [download : Download items] ***********************************************
Monday 17 June 2019 01:40:41 +0100 (0:00:00.132) 0:03:35.150 ***********
fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
[...identical 'delegate_to' failure repeated for kube1, kube2 and kube3 on each remaining download task, interleaved with three "included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3" results...]
PLAY RECAP *********************************************************************
kube1 : ok=109 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0
kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0
kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0
Monday 17 June 2019 01:40:44 +0100 (0:00:02.720) 0:03:37.871 ***********
===============================================================================
Install packages ------------------------------------------------------- 33.18s
Wait for host to be available ------------------------------------------ 21.75s
gather facts from all instances ---------------------------------------- 16.56s
container-engine/docker : Docker | pause while Docker restarts --------- 10.21s
Persist loaded modules -------------------------------------------------- 6.30s
container-engine/docker : Docker | reload docker ------------------------ 4.29s
kubernetes/preinstall : Create kubernetes directories ------------------- 4.03s
bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.76s
download : Download items ----------------------------------------------- 2.72s
Load required kernel modules -------------------------------------------- 2.69s
bootstrap-os : Gather nodes hostnames ----------------------------------- 2.58s
Extend root VG ---------------------------------------------------------- 2.49s
kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.39s
kubernetes/preinstall : Create cni directories -------------------------- 2.37s
Gathering Facts --------------------------------------------------------- 2.20s
container-engine/docker : Write docker options systemd drop-in ---------- 2.17s
container-engine/docker : ensure service is started if docker packages are already present --- 2.09s
kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.08s
container-engine/docker : Write docker dns systemd drop-in -------------- 2.03s
bootstrap-os : Create remote_tmp for it is used by another module ------- 1.97s
==> kube3: An error occurred. The error will be shown after all tasks complete.
An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below.
An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again:
Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again.
Build step 'Execute shell' marked build as failure
Performing Post build task...
Could not match :Build started : False
Logical operation result is FALSE
Skipping script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0

From ci at centos.org Mon Jun 17 01:14:36 2019
From: ci at centos.org (ci at centos.org)
Date: Mon, 17 Jun 2019 01:14:36 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #226
In-Reply-To: <365286279.747.1560648186203.JavaMail.jenkins@jenkins.ci.centos.org>
References: <365286279.747.1560648186203.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <444757037.805.1560734076059.JavaMail.jenkins@jenkins.ci.centos.org>

See
------------------------------------------
[...truncated 56.42 KB...]
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Wait for instance(s) creation to complete] *******************************
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
--> Scenario: 'default'
--> Action: 'prepare'
PLAY [Prepare] *****************************************************************
TASK [Gathering Facts] *********************************************************
ok: [instance]
TASK [Install Dependency Packages] *********************************************
changed: [instance]
TASK [Install Dependency Packages] *********************************************
changed: [instance]
PLAY RECAP *********************************************************************
instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: author
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[...same meta/main.yml dump as above...]
[703] Should change default metadata: description
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[...same meta/main.yml dump as above...]
[703] Should change default metadata: company
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[...same meta/main.yml dump as above...]
[703] Should change default metadata: license
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[...same meta/main.yml dump as above...]
An error occurred during the test sequence action: 'lint'. Cleaning up.
--> Scenario: 'default'
--> Action: 'destroy'
PLAY [Destroy] *****************************************************************
TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Delete docker network(s)] ************************************************
PLAY RECAP *********************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── create
    └── prepare

--> Scenario: 'default'
--> Action: 'create'
[DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
PLAY [Create] ******************************************************************
TASK [Log into a Docker registry] **********************************************
skipping: [localhost] => (item=None)
TASK [Create Dockerfiles from image names] *************************************
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Discover local Docker images] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]
TASK [Build an Ansible compatible image] ***************************************
ok: [localhost] => (item=None)
ok: [localhost]
TASK [Create docker network(s)] ************************************************
TASK [Determine the CMD directives] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]
TASK [Create molecule instance(s)] *********************************************
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Wait for instance(s) creation to complete] *******************************
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
--> Scenario: 'default'
--> Action: 'prepare'
PLAY [Prepare] *****************************************************************
TASK [Gathering Facts] *********************************************************
ok: [instance]
TASK [Install Dependency Packages] *********************************************
changed: [instance]
TASK [Install Dependency Packages] *********************************************
changed: [instance]
PLAY RECAP *********************************************************************
instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: author
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[...same meta/main.yml dump as above...]
[703] Should change default metadata: description
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[...same meta/main.yml dump as above...]
[703] Should change default metadata: company
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[...same meta/main.yml dump as above...]
[703] Should change default metadata: license
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[...same meta/main.yml dump as above...]
An error occurred during the test sequence action: 'lint'. Cleaning up.
--> Scenario: 'default'
--> Action: 'destroy'
PLAY [Destroy] *****************************************************************
TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Delete docker network(s)] ************************************************
PLAY RECAP *********************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
Build step 'Execute shell' marked build as failure
Performing Post build task...
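[Editor's note on the [701]/[703] findings above: ansible-lint is flagging untouched `ansible-galaxy init` template placeholders in the role's meta/main.yml. A sketch of a galaxy_info block that would satisfy those rules; every value below is an illustrative placeholder, not the maintainers' actual metadata:]

```yaml
galaxy_info:
  author: Gluster infra maintainers                # [703] replaces 'your name'
  description: Configure firewalld for GlusterFS   # [703] replaces 'your description'
  company: Example Org                             # [703] replaces 'your company (optional)'
  license: GPLv3                                   # [703] replaces 'license (GPLv2, CC-BY, etc)'
  min_ansible_version: 2.5
  platforms:                                       # [701] Role info should contain platforms
    - name: EL
      versions:
        - 7
  galaxy_tags: []

dependencies: []
```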
Could not match :Build started : False
Logical operation result is FALSE
Skipping script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0

From ci at centos.org Tue Jun 18 00:16:05 2019
From: ci at centos.org (ci at centos.org)
Date: Tue, 18 Jun 2019 00:16:05 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #398
In-Reply-To: <945529595.799.1560730562079.JavaMail.jenkins@jenkins.ci.centos.org>
References: <945529595.799.1560730562079.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <929176315.902.1560816965509.JavaMail.jenkins@jenkins.ci.centos.org>

See
------------------------------------------
[...truncated 38.59 KB...]
Transaction test succeeded
Running transaction
  Installing : python36-libs-3.6.8-1.el7.x86_64            1/52
  Installing : python36-3.6.8-1.el7.x86_64                 2/52
  Installing : apr-1.4.8-3.el7_4.1.x86_64                  3/52
  Installing : mpfr-3.1.1-4.el7.x86_64                     4/52
  Installing : libmpc-1.0.1-3.el7.x86_64                   5/52
  Installing : apr-util-1.5.2-6.el7.x86_64                 6/52
  Installing : python36-six-1.11.0-3.el7.noarch            7/52
  Installing : cpp-4.8.5-36.el7_6.2.x86_64                 8/52
  Installing : python36-idna-2.7-2.el7.noarch              9/52
  Installing : python36-pysocks-1.6.8-6.el7.noarch        10/52
  Installing : python36-urllib3-1.19.1-5.el7.noarch       11/52
  Installing : python36-pyroute2-0.4.13-2.el7.noarch      12/52
  Installing : python36-setuptools-39.2.0-3.el7.noarch    13/52
  Installing : python36-chardet-2.3.0-6.el7.noarch        14/52
  Installing : python36-requests-2.12.5-3.el7.noarch      15/52
  Installing : python36-distro-1.2.0-3.el7.noarch         16/52
  Installing : python36-markupsafe-0.23-3.el7.x86_64      17/52
  Installing : python36-jinja2-2.8.1-2.el7.noarch         18/52
  Installing : python36-rpm-4.11.3-4.el7.x86_64           19/52
  Installing : elfutils-0.172-2.el7.x86_64                20/52
  Installing : unzip-6.0-19.el7.x86_64                    21/52
  Installing : dwz-0.11-3.el7.x86_64                      22/52
  Installing : bzip2-1.0.6-13.el7.x86_64                  23/52
  Installing : usermode-1.111-5.el7.x86_64                24/52
  Installing : pakchois-0.4-10.el7.x86_64                 25/52
  Installing : distribution-gpg-keys-1.31-1.el7.noarch    26/52
  Installing : mock-core-configs-30.3-1.el7.noarch        27/52
  Installing : patch-2.7.1-10.el7_5.x86_64                28/52
  Installing : kernel-headers-3.10.0-957.21.2.el7.x86_64  29/52
  Installing : glibc-headers-2.17-260.el7_6.5.x86_64      30/52
  Installing : glibc-devel-2.17-260.el7_6.5.x86_64        31/52
  Installing : gcc-4.8.5-36.el7_6.2.x86_64                32/52
  Installing : libmodman-2.0.1-8.el7.x86_64               33/52
  Installing : libproxy-0.4.11-11.el7.x86_64              34/52
  Installing : gdb-7.6.1-114.el7.x86_64                   35/52
  Installing : perl-Thread-Queue-3.02-2.el7.noarch        36/52
  Installing : perl-srpm-macros-1-8.el7.noarch            37/52
  Installing : pigz-2.3.4-1.el7.x86_64                    38/52
  Installing : golang-src-1.11.5-1.el7.noarch             39/52
  Installing : nettle-2.7.1-8.el7.x86_64                  40/52
  Installing : zip-3.0-11.el7.x86_64                      41/52
  Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 42/52
  Installing : mercurial-2.6.2-8.el7_4.x86_64             43/52
  Installing : trousers-0.3.14-2.el7.x86_64               44/52
  Installing : gnutls-3.3.29-9.el7_6.x86_64               45/52
  Installing : neon-0.30.0-3.el7.x86_64                   46/52
  Installing : subversion-libs-1.7.14-14.el7.x86_64       47/52
  Installing : subversion-1.7.14-14.el7.x86_64            48/52
  Installing : golang-1.11.5-1.el7.x86_64                 49/52
  Installing : golang-bin-1.11.5-1.el7.x86_64             50/52
  Installing : rpm-build-4.11.3-35.el7.x86_64             51/52
  Installing : mock-1.4.16-1.el7.noarch                   52/52
  Verifying : trousers-0.3.14-2.el7.x86_64                 1/52
  Verifying : python36-idna-2.7-2.el7.noarch               2/52
  Verifying : rpm-build-4.11.3-35.el7.x86_64               3/52
  Verifying : python36-pysocks-1.6.8-6.el7.noarch          4/52
  Verifying : mercurial-2.6.2-8.el7_4.x86_64               5/52
  Verifying : zip-3.0-11.el7.x86_64                        6/52
  Verifying : python36-3.6.8-1.el7.x86_64                  7/52
  Verifying : subversion-libs-1.7.14-14.el7.x86_64         8/52
  Verifying : python36-urllib3-1.19.1-5.el7.noarch         9/52
  Verifying : nettle-2.7.1-8.el7.x86_64                   10/52
  Verifying : gcc-4.8.5-36.el7_6.2.x86_64                 11/52
  Verifying : golang-src-1.11.5-1.el7.noarch              12/52
  Verifying : python36-pyroute2-0.4.13-2.el7.noarch       13/52
  Verifying : pigz-2.3.4-1.el7.x86_64                     14/52
  Verifying : perl-srpm-macros-1-8.el7.noarch             15/52
  Verifying : golang-1.11.5-1.el7.x86_64                  16/52
  Verifying : perl-Thread-Queue-3.02-2.el7.noarch         17/52
  Verifying : glibc-devel-2.17-260.el7_6.5.x86_64         18/52
  Verifying : golang-bin-1.11.5-1.el7.x86_64              19/52
  Verifying : gdb-7.6.1-114.el7.x86_64                    20/52
  Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/52
  Verifying : gnutls-3.3.29-9.el7_6.x86_64                22/52
  Verifying : mock-1.4.16-1.el7.noarch                    23/52
  Verifying : libmodman-2.0.1-8.el7.x86_64                24/52
  Verifying : python36-setuptools-39.2.0-3.el7.noarch     25/52
  Verifying : mpfr-3.1.1-4.el7.x86_64                     26/52
  Verifying : python36-six-1.11.0-3.el7.noarch            27/52
  Verifying : apr-util-1.5.2-6.el7.x86_64                 28/52
  Verifying : python36-chardet-2.3.0-6.el7.noarch         29/52
  Verifying : kernel-headers-3.10.0-957.21.2.el7.x86_64   30/52
  Verifying : patch-2.7.1-10.el7_5.x86_64                 31/52
  Verifying : distribution-gpg-keys-1.31-1.el7.noarch     32/52
  Verifying : pakchois-0.4-10.el7.x86_64                  33/52
  Verifying : mock-core-configs-30.3-1.el7.noarch         34/52
  Verifying : usermode-1.111-5.el7.x86_64                 35/52
  Verifying : apr-1.4.8-3.el7_4.1.x86_64                  36/52
  Verifying : libproxy-0.4.11-11.el7.x86_64               37/52
  Verifying : neon-0.30.0-3.el7.x86_64                    38/52
  Verifying : bzip2-1.0.6-13.el7.x86_64                   39/52
  Verifying : subversion-1.7.14-14.el7.x86_64             40/52
  Verifying : python36-distro-1.2.0-3.el7.noarch          41/52
  Verifying : glibc-headers-2.17-260.el7_6.5.x86_64       42/52
  Verifying : dwz-0.11-3.el7.x86_64                       43/52
  Verifying : unzip-6.0-19.el7.x86_64                     44/52
  Verifying : python36-markupsafe-0.23-3.el7.x86_64       45/52
  Verifying : cpp-4.8.5-36.el7_6.2.x86_64                 46/52
  Verifying : python36-requests-2.12.5-3.el7.noarch       47/52
  Verifying : python36-jinja2-2.8.1-2.el7.noarch          48/52
  Verifying : python36-libs-3.6.8-1.el7.x86_64            49/52
  Verifying : elfutils-0.172-2.el7.x86_64                 50/52
  Verifying : python36-rpm-4.11.3-4.el7.x86_64            51/52
  Verifying : libmpc-1.0.1-3.el7.x86_64                   52/52

Installed:
  golang.x86_64 0:1.11.5-1.el7  mock.noarch 0:1.4.16-1.el7  rpm-build.x86_64 0:4.11.3-35.el7

Dependency Installed:
  apr.x86_64 0:1.4.8-3.el7_4.1  apr-util.x86_64 0:1.5.2-6.el7  bzip2.x86_64 0:1.0.6-13.el7
  cpp.x86_64 0:4.8.5-36.el7_6.2  distribution-gpg-keys.noarch 0:1.31-1.el7  dwz.x86_64 0:0.11-3.el7
  elfutils.x86_64 0:0.172-2.el7  gcc.x86_64 0:4.8.5-36.el7_6.2  gdb.x86_64 0:7.6.1-114.el7
  glibc-devel.x86_64 0:2.17-260.el7_6.5  glibc-headers.x86_64 0:2.17-260.el7_6.5  gnutls.x86_64 0:3.3.29-9.el7_6
  golang-bin.x86_64 0:1.11.5-1.el7  golang-src.noarch 0:1.11.5-1.el7  kernel-headers.x86_64 0:3.10.0-957.21.2.el7
  libmodman.x86_64 0:2.0.1-8.el7  libmpc.x86_64 0:1.0.1-3.el7  libproxy.x86_64 0:0.4.11-11.el7
  mercurial.x86_64 0:2.6.2-8.el7_4  mock-core-configs.noarch 0:30.3-1.el7  mpfr.x86_64 0:3.1.1-4.el7
  neon.x86_64 0:0.30.0-3.el7  nettle.x86_64 0:2.7.1-8.el7  pakchois.x86_64 0:0.4-10.el7
  patch.x86_64 0:2.7.1-10.el7_5  perl-Thread-Queue.noarch 0:3.02-2.el7  perl-srpm-macros.noarch 0:1-8.el7
  pigz.x86_64 0:2.3.4-1.el7  python36.x86_64 0:3.6.8-1.el7  python36-chardet.noarch 0:2.3.0-6.el7
  python36-distro.noarch 0:1.2.0-3.el7  python36-idna.noarch 0:2.7-2.el7  python36-jinja2.noarch 0:2.8.1-2.el7
  python36-libs.x86_64 0:3.6.8-1.el7  python36-markupsafe.x86_64 0:0.23-3.el7  python36-pyroute2.noarch 0:0.4.13-2.el7
  python36-pysocks.noarch 0:1.6.8-6.el7  python36-requests.noarch 0:2.12.5-3.el7  python36-rpm.x86_64 0:4.11.3-4.el7
  python36-setuptools.noarch 0:39.2.0-3.el7  python36-six.noarch 0:1.11.0-3.el7  python36-urllib3.noarch 0:1.19.1-5.el7
  redhat-rpm-config.noarch 0:9.1.0-87.el7.centos  subversion.x86_64 0:1.7.14-14.el7  subversion-libs.x86_64 0:1.7.14-14.el7
  trousers.x86_64 0:0.3.14-2.el7  unzip.x86_64 0:6.0-19.el7  usermode.x86_64 0:1.111-5.el7
  zip.x86_64 0:3.0-11.el7

Complete!
LINUX Installing dep.
Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 949 0 --:--:-- --:--:-- --:--:-- 949 100 8513k 100 8513k 0 0 8889k 0 --:--:-- --:--:-- --:--:-- 8889k Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2058 0 --:--:-- --:--:-- --:--:-- 2062 0 38.3M 0 254k 0 0 535k 0 0:01:13 --:--:-- 0:01:13 535k100 38.3M 100 38.3M 0 0 41.1M 0 --:--:-- --:--:-- --:--:-- 83.2M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 587 0 --:--:-- --:--:-- --:--:-- 586 0 0 0 620 0 0 1788 0 --:--:-- --:--:-- --:--:-- 1788 100 10.7M 100 10.7M 0 0 18.2M 0 --:--:-- --:--:-- --:--:-- 18.2M ~/nightlyrpmJNgHRO/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmJNgHRO/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmJNgHRO/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmJNgHRO ~ INFO: mock.py version 1.4.16 starting (python version = 3.6.8)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmJNgHRO/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.16 INFO: Mock Version: 1.4.16 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmJNgHRO/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 27 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M b7106a223b874088935661be55921a52 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.asrvg364:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins6835084513234524087.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done 4e7a8509 +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 123 | n59.pufty | 172.19.3.123 | pufty | 3692 | Deployed | 4e7a8509 | None | None | 7 | x86_64 | 1 | 2580 | None | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Tue Jun 18 00:37:16 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 18 Jun 2019 00:37:16 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #202 In-Reply-To: <928466262.801.1560732044457.JavaMail.jenkins@jenkins.ci.centos.org> References: <928466262.801.1560732044457.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <822122687.903.1560818236987.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 287.39 KB...] 
TASK [container-engine/docker : check number of search domains] **************** Tuesday 18 June 2019 01:36:51 +0100 (0:00:00.130) 0:01:58.655 ********** TASK [container-engine/docker : check length of search domains] **************** Tuesday 18 June 2019 01:36:51 +0100 (0:00:00.128) 0:01:58.784 ********** TASK [container-engine/docker : check for minimum kernel version] ************** Tuesday 18 June 2019 01:36:51 +0100 (0:00:00.127) 0:01:58.911 ********** TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Tuesday 18 June 2019 01:36:51 +0100 (0:00:00.132) 0:01:59.043 ********** TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] *** Tuesday 18 June 2019 01:36:51 +0100 (0:00:00.248) 0:01:59.292 ********** TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Tuesday 18 June 2019 01:36:52 +0100 (0:00:00.642) 0:01:59.934 ********** TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Tuesday 18 June 2019 01:36:52 +0100 (0:00:00.111) 0:02:00.046 ********** TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Tuesday 18 June 2019 01:36:52 +0100 (0:00:00.113) 0:02:00.159 ********** TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Tuesday 18 June 2019 01:36:52 +0100 (0:00:00.144) 0:02:00.303 ********** TASK [container-engine/docker : Configure docker repository on Fedora] ********* Tuesday 18 June 2019 01:36:52 +0100 (0:00:00.137) 0:02:00.441 ********** TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Tuesday 18 June 2019 01:36:53 +0100 (0:00:00.126) 0:02:00.568 ********** TASK [container-engine/docker : Copy yum.conf for editing] ********************* Tuesday 18 June 2019 01:36:53 +0100 (0:00:00.127) 0:02:00.695 ********** TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Tuesday 18 June 2019 
01:36:53 +0100 (0:00:00.126) 0:02:00.822 ********** TASK [container-engine/docker : ensure docker packages are installed] ********** Tuesday 18 June 2019 01:36:53 +0100 (0:00:00.127) 0:02:00.950 ********** TASK [container-engine/docker : Ensure docker packages are installed] ********** Tuesday 18 June 2019 01:36:53 +0100 (0:00:00.156) 0:02:01.106 ********** TASK [container-engine/docker : get available packages on Ubuntu] ************** Tuesday 18 June 2019 01:36:53 +0100 (0:00:00.152) 0:02:01.258 ********** TASK [container-engine/docker : show available packages on ubuntu] ************* Tuesday 18 June 2019 01:36:53 +0100 (0:00:00.122) 0:02:01.381 ********** TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Tuesday 18 June 2019 01:36:54 +0100 (0:00:00.126) 0:02:01.508 ********** TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Tuesday 18 June 2019 01:36:54 +0100 (0:00:00.127) 0:02:01.635 ********** ok: [kube2] ok: [kube3] ok: [kube1] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Tuesday 18 June 2019 01:36:55 +0100 (0:00:00.881) 0:02:02.517 ********** ok: [kube1] ok: [kube2] ok: [kube3] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Tuesday 18 June 2019 01:36:55 +0100 (0:00:00.521) 0:02:03.039 ********** TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Tuesday 18 June 2019 01:36:55 +0100 (0:00:00.129) 0:02:03.169 ********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Tuesday 18 June 2019 01:36:56 +0100 (0:00:00.442) 0:02:03.611 ********** TASK [container-engine/docker : get systemd version] *************************** Tuesday 18 June 2019 01:36:56 +0100 (0:00:00.153) 0:02:03.765 ********** TASK [container-engine/docker : Write docker.service systemd file] ************* Tuesday 18 June 2019 01:36:56 +0100 (0:00:00.146) 0:02:03.912 ********** TASK [container-engine/docker : Write docker options systemd drop-in] ********** Tuesday 18 June 2019 01:36:56 +0100 (0:00:00.139) 0:02:04.052 ********** changed: [kube2] changed: [kube1] changed: [kube3] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Tuesday 18 June 2019 01:36:57 +0100 (0:00:00.976) 0:02:05.028 ********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Tuesday 18 June 2019 01:36:58 +0100 (0:00:00.903) 0:02:05.932 ********** TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Tuesday 18 June 2019 01:36:58 +0100 (0:00:00.149) 0:02:06.082 ********** RUNNING HANDLER [container-engine/docker : restart docker] ********************* Tuesday 18 June 2019 01:36:58 +0100 (0:00:00.120) 0:02:06.202 ********** changed: [kube2] changed: [kube1] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Tuesday 18 June 2019 01:36:59 +0100 (0:00:00.431) 0:02:06.634 ********** changed: [kube2] changed: [kube1] changed: [kube3] RUNNING HANDLER 
[container-engine/docker : Docker | reload docker.socket] ****** Tuesday 18 June 2019 01:36:59 +0100 (0:00:00.516) 0:02:07.150 ********** RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Tuesday 18 June 2019 01:36:59 +0100 (0:00:00.128) 0:02:07.279 ********** changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Tuesday 18 June 2019 01:37:02 +0100 (0:00:03.061) 0:02:10.340 ********** Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube2] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Tuesday 18 June 2019 01:37:12 +0100 (0:00:10.090) 0:02:20.431 ********** changed: [kube2] changed: [kube1] changed: [kube3] TASK [container-engine/docker : ensure docker service is started and enabled] *** Tuesday 18 June 2019 01:37:13 +0100 (0:00:00.557) 0:02:20.989 ********** ok: [kube2] => (item=docker) ok: [kube1] => (item=docker) ok: [kube3] => (item=docker) TASK [download : include_tasks] ************************************************ Tuesday 18 June 2019 01:37:14 +0100 (0:00:00.593) 0:02:21.583 ********** included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Tuesday 18 June 2019 01:37:14 +0100 (0:00:00.213) 0:02:21.796 ********** ok: [kube1] ok: [kube2] ok: [kube3] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Tuesday 18 June 2019 01:37:14 +0100 (0:00:00.519) 0:02:22.315 ********** changed: [kube2] changed: [kube1] changed: [kube3] TASK [download : container_download | create local directory for saved/loaded container images] *** Tuesday 18 June 2019 01:37:15 +0100 (0:00:00.501) 0:02:22.817 ********** TASK [download : 
Download items] *********************************************** Tuesday 18 June 2019 01:37:15 +0100 (0:00:00.070) 0:02:22.887 ********** fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} [...identical 'delegate_to' TaskInclude failure repeated for kube1, kube2 and kube3 on each download task...] included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} PLAY RECAP ********************************************************************* kube1 : ok=108 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0 kube2 : ok=97 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0 kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0 Tuesday 18 June 2019 01:37:16 +0100 (0:00:01.332) 0:02:24.220 ********** =============================================================================== Install packages ------------------------------------------------------- 25.19s Wait for host to be available ------------------------------------------ 16.20s Extend root VG --------------------------------------------------------- 15.53s gather facts from all instances ---------------------------------------- 10.09s container-engine/docker : Docker | pause while Docker restarts --------- 10.09s Persist loaded modules -------------------------------------------------- 3.31s container-engine/docker : Docker | reload docker ------------------------ 3.06s kubernetes/preinstall : Create kubernetes directories ------------------- 1.95s bootstrap-os : Gather nodes 
hostnames ----------------------------------- 1.54s Extend the root LV and FS to occupy remaining space --------------------- 1.48s bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 1.45s Load required kernel modules -------------------------------------------- 1.41s download : Download items ----------------------------------------------- 1.33s kubernetes/preinstall : Create cni directories -------------------------- 1.29s Gathering Facts --------------------------------------------------------- 1.14s download : Download items ----------------------------------------------- 1.14s bootstrap-os : Disable fastestmirror plugin ----------------------------- 1.13s bootstrap-os : Create remote_tmp for it is used by another module ------- 1.11s kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 1.04s download : Sync container ----------------------------------------------- 1.03s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
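
The repeated kubespray failure above is Ansible rejecting `delegate_to` set directly on a dynamic task include: newer Ansible treats `include_tasks` as a TaskInclude, for which `delegate_to` is not a valid attribute. A hedged sketch of the shape of a fix, assuming the role can route the keyword through the `apply` option of `include_tasks` (the `download_delegate` variable name below is an assumption, not confirmed from this log):

```yaml
# Failing shape implied by the log: delegate_to on a dynamic include.
#
# - include_tasks: download_container.yml
#   delegate_to: "{{ download_delegate }}"   # rejected: not valid for TaskInclude
#
# Workable shape: pass per-task keywords through include_tasks' apply option,
# so delegate_to is applied to the included tasks rather than to the include.
- name: container_download | include download tasks with delegated execution
  include_tasks:
    file: download_container.yml
    apply:
      delegate_to: "{{ download_delegate }}"  # assumed variable name
```

This is a sketch of the general technique, not the exact patch the kubespray project applied.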
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Tue Jun 18 01:18:10 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 18 Jun 2019 01:18:10 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #227 In-Reply-To: <444757037.805.1560734076059.JavaMail.jenkins@jenkins.ci.centos.org> References: <444757037.805.1560734076059.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1997789458.906.1560820690662.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 56.41 KB...] changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. 
Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... [701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], 
u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── create └── prepare --> Scenario: 'default' --> Action: 'create' [DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg. PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] 
******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', 
u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
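[Editor's note on the lint failure above: rules [701] and [703] fire because the role's meta/main.yml still carries the boilerplate galaxy_info that `ansible-galaxy init` generates ("your name", "your description", and no platforms list). A minimal sketch of a meta/main.yml that would satisfy both rules — the author, description, company, license, and platform values below are placeholders for illustration, not the project's actual metadata:]

```yaml
# roles/firewall_config/meta/main.yml -- placeholder values, adjust for the project
galaxy_info:
  author: Gluster maintainers                      # [703] replace default "your name"
  description: Configure firewalld for GlusterFS   # [703] replace "your description"
  company: example company                          # [703] or drop the key entirely
  license: GPLv2                                    # [703] replace "license (GPLv2, CC-BY, etc)"
  min_ansible_version: 1.2
  platforms:                                        # [701] role info should contain platforms
    - name: EL
      versions:
        - 7
  galaxy_tags: []
dependencies: []
```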
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Wed Jun 19 00:16:07 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 19 Jun 2019 00:16:07 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #399 In-Reply-To: <929176315.902.1560816965509.JavaMail.jenkins@jenkins.ci.centos.org> References: <929176315.902.1560816965509.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1348150243.1056.1560903367988.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 38.58 KB...] Transaction test succeeded Running transaction Installing : python36-libs-3.6.8-1.el7.x86_64 1/52 Installing : python36-3.6.8-1.el7.x86_64 2/52 Installing : apr-1.4.8-3.el7_4.1.x86_64 3/52 Installing : mpfr-3.1.1-4.el7.x86_64 4/52 Installing : libmpc-1.0.1-3.el7.x86_64 5/52 Installing : apr-util-1.5.2-6.el7.x86_64 6/52 Installing : python36-six-1.11.0-3.el7.noarch 7/52 Installing : cpp-4.8.5-36.el7_6.2.x86_64 8/52 Installing : python36-idna-2.7-2.el7.noarch 9/52 Installing : python36-pysocks-1.6.8-6.el7.noarch 10/52 Installing : python36-urllib3-1.19.1-5.el7.noarch 11/52 Installing : python36-pyroute2-0.4.13-2.el7.noarch 12/52 Installing : python36-setuptools-39.2.0-3.el7.noarch 13/52 Installing : python36-chardet-2.3.0-6.el7.noarch 14/52 Installing : python36-requests-2.12.5-3.el7.noarch 15/52 Installing : python36-distro-1.2.0-3.el7.noarch 16/52 Installing : python36-markupsafe-0.23-3.el7.x86_64 17/52 Installing : python36-jinja2-2.8.1-2.el7.noarch 18/52 Installing : python36-rpm-4.11.3-4.el7.x86_64 19/52 Installing : elfutils-0.172-2.el7.x86_64 20/52 Installing : unzip-6.0-19.el7.x86_64 21/52 Installing : dwz-0.11-3.el7.x86_64 
22/52 Installing : bzip2-1.0.6-13.el7.x86_64 23/52 Installing : usermode-1.111-5.el7.x86_64 24/52 Installing : pakchois-0.4-10.el7.x86_64 25/52 Installing : distribution-gpg-keys-1.31-1.el7.noarch 26/52 Installing : mock-core-configs-30.3-1.el7.noarch 27/52 Installing : patch-2.7.1-10.el7_5.x86_64 28/52 Installing : kernel-headers-3.10.0-957.21.2.el7.x86_64 29/52 Installing : glibc-headers-2.17-260.el7_6.5.x86_64 30/52 Installing : glibc-devel-2.17-260.el7_6.5.x86_64 31/52 Installing : gcc-4.8.5-36.el7_6.2.x86_64 32/52 Installing : libmodman-2.0.1-8.el7.x86_64 33/52 Installing : libproxy-0.4.11-11.el7.x86_64 34/52 Installing : gdb-7.6.1-114.el7.x86_64 35/52 Installing : perl-Thread-Queue-3.02-2.el7.noarch 36/52 Installing : perl-srpm-macros-1-8.el7.noarch 37/52 Installing : pigz-2.3.4-1.el7.x86_64 38/52 Installing : golang-src-1.11.5-1.el7.noarch 39/52 Installing : nettle-2.7.1-8.el7.x86_64 40/52 Installing : zip-3.0-11.el7.x86_64 41/52 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 42/52 Installing : mercurial-2.6.2-8.el7_4.x86_64 43/52 Installing : trousers-0.3.14-2.el7.x86_64 44/52 Installing : gnutls-3.3.29-9.el7_6.x86_64 45/52 Installing : neon-0.30.0-3.el7.x86_64 46/52 Installing : subversion-libs-1.7.14-14.el7.x86_64 47/52 Installing : subversion-1.7.14-14.el7.x86_64 48/52 Installing : golang-1.11.5-1.el7.x86_64 49/52 Installing : golang-bin-1.11.5-1.el7.x86_64 50/52 Installing : rpm-build-4.11.3-35.el7.x86_64 51/52 Installing : mock-1.4.16-1.el7.noarch 52/52 Verifying : trousers-0.3.14-2.el7.x86_64 1/52 Verifying : python36-idna-2.7-2.el7.noarch 2/52 Verifying : rpm-build-4.11.3-35.el7.x86_64 3/52 Verifying : python36-pysocks-1.6.8-6.el7.noarch 4/52 Verifying : mercurial-2.6.2-8.el7_4.x86_64 5/52 Verifying : zip-3.0-11.el7.x86_64 6/52 Verifying : python36-3.6.8-1.el7.x86_64 7/52 Verifying : subversion-libs-1.7.14-14.el7.x86_64 8/52 Verifying : python36-urllib3-1.19.1-5.el7.noarch 9/52 Verifying : nettle-2.7.1-8.el7.x86_64 10/52 Verifying : 
gcc-4.8.5-36.el7_6.2.x86_64 11/52 Verifying : golang-src-1.11.5-1.el7.noarch 12/52 Verifying : python36-pyroute2-0.4.13-2.el7.noarch 13/52 Verifying : pigz-2.3.4-1.el7.x86_64 14/52 Verifying : perl-srpm-macros-1-8.el7.noarch 15/52 Verifying : golang-1.11.5-1.el7.x86_64 16/52 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 17/52 Verifying : glibc-devel-2.17-260.el7_6.5.x86_64 18/52 Verifying : golang-bin-1.11.5-1.el7.x86_64 19/52 Verifying : gdb-7.6.1-114.el7.x86_64 20/52 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/52 Verifying : gnutls-3.3.29-9.el7_6.x86_64 22/52 Verifying : mock-1.4.16-1.el7.noarch 23/52 Verifying : libmodman-2.0.1-8.el7.x86_64 24/52 Verifying : python36-setuptools-39.2.0-3.el7.noarch 25/52 Verifying : mpfr-3.1.1-4.el7.x86_64 26/52 Verifying : python36-six-1.11.0-3.el7.noarch 27/52 Verifying : apr-util-1.5.2-6.el7.x86_64 28/52 Verifying : python36-chardet-2.3.0-6.el7.noarch 29/52 Verifying : kernel-headers-3.10.0-957.21.2.el7.x86_64 30/52 Verifying : patch-2.7.1-10.el7_5.x86_64 31/52 Verifying : distribution-gpg-keys-1.31-1.el7.noarch 32/52 Verifying : pakchois-0.4-10.el7.x86_64 33/52 Verifying : mock-core-configs-30.3-1.el7.noarch 34/52 Verifying : usermode-1.111-5.el7.x86_64 35/52 Verifying : apr-1.4.8-3.el7_4.1.x86_64 36/52 Verifying : libproxy-0.4.11-11.el7.x86_64 37/52 Verifying : neon-0.30.0-3.el7.x86_64 38/52 Verifying : bzip2-1.0.6-13.el7.x86_64 39/52 Verifying : subversion-1.7.14-14.el7.x86_64 40/52 Verifying : python36-distro-1.2.0-3.el7.noarch 41/52 Verifying : glibc-headers-2.17-260.el7_6.5.x86_64 42/52 Verifying : dwz-0.11-3.el7.x86_64 43/52 Verifying : unzip-6.0-19.el7.x86_64 44/52 Verifying : python36-markupsafe-0.23-3.el7.x86_64 45/52 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 46/52 Verifying : python36-requests-2.12.5-3.el7.noarch 47/52 Verifying : python36-jinja2-2.8.1-2.el7.noarch 48/52 Verifying : python36-libs-3.6.8-1.el7.x86_64 49/52 Verifying : elfutils-0.172-2.el7.x86_64 50/52 Verifying : 
python36-rpm-4.11.3-4.el7.x86_64 51/52 Verifying : libmpc-1.0.1-3.el7.x86_64 52/52 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.16-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.31-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5 glibc-headers.x86_64 0:2.17-260.el7_6.5 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.21.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.3-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python36.x86_64 0:3.6.8-1.el7 python36-chardet.noarch 0:2.3.0-6.el7 python36-distro.noarch 0:1.2.0-3.el7 python36-idna.noarch 0:2.7-2.el7 python36-jinja2.noarch 0:2.8.1-2.el7 python36-libs.x86_64 0:3.6.8-1.el7 python36-markupsafe.x86_64 0:0.23-3.el7 python36-pyroute2.noarch 0:0.4.13-2.el7 python36-pysocks.noarch 0:1.6.8-6.el7 python36-requests.noarch 0:2.12.5-3.el7 python36-rpm.x86_64 0:4.11.3-4.el7 python36-setuptools.noarch 0:39.2.0-3.el7 python36-six.noarch 0:1.11.0-3.el7 python36-urllib3.noarch 0:1.19.1-5.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1822 0 --:--:-- --:--:-- --:--:-- 1827 100 8513k 100 8513k 0 0 10.4M 0 --:--:-- --:--:-- --:--:-- 10.4M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2020 0 --:--:-- --:--:-- --:--:-- 2022 23 38.3M 23 9145k 0 0 9249k 0 0:00:04 --:--:-- 0:00:04 9249k 77 38.3M 77 29.7M 0 0 14.9M 0 0:00:02 0:00:01 0:00:01 20.8M100 38.3M 100 38.3M 0 0 16.3M 0 0:00:02 0:00:02 --:--:-- 21.6M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 544 0 --:--:-- --:--:-- --:--:-- 546 0 0 0 620 0 0 1515 0 --:--:-- --:--:-- --:--:-- 1515 100 10.7M 100 10.7M 0 0 12.9M 0 --:--:-- --:--:-- --:--:-- 12.9M ~/nightlyrpmX7OUdV/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmX7OUdV/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmX7OUdV/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmX7OUdV ~ INFO: mock.py version 1.4.16 starting (python version = 3.6.8)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmX7OUdV/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.16 INFO: Mock Version: 1.4.16 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmX7OUdV/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 27 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 328a77c6fd624e5d85980fd68313b104 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.xgr4mrtz:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins6768915811149669622.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done b033e426 +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 120 | n56.pufty | 172.19.3.120 | pufty | 3700 | Deployed | b033e426 | None | None | 7 | x86_64 | 1 | 2550 | None | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Wed Jun 19 00:40:56 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 19 Jun 2019 00:40:56 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #203 In-Reply-To: <822122687.903.1560818236987.JavaMail.jenkins@jenkins.ci.centos.org> References: <822122687.903.1560818236987.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1440591987.1059.1560904856072.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 287.51 KB...] 
TASK [container-engine/docker : check number of search domains] **************** Wednesday 19 June 2019 01:40:14 +0100 (0:00:00.358) 0:02:59.171 ******** TASK [container-engine/docker : check length of search domains] **************** Wednesday 19 June 2019 01:40:14 +0100 (0:00:00.307) 0:02:59.478 ******** TASK [container-engine/docker : check for minimum kernel version] ************** Wednesday 19 June 2019 01:40:14 +0100 (0:00:00.290) 0:02:59.769 ******** TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Wednesday 19 June 2019 01:40:15 +0100 (0:00:00.286) 0:03:00.055 ******** TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] *** Wednesday 19 June 2019 01:40:15 +0100 (0:00:00.595) 0:03:00.650 ******** TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Wednesday 19 June 2019 01:40:17 +0100 (0:00:01.346) 0:03:01.997 ******** TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Wednesday 19 June 2019 01:40:17 +0100 (0:00:00.254) 0:03:02.252 ******** TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Wednesday 19 June 2019 01:40:17 +0100 (0:00:00.249) 0:03:02.502 ******** TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Wednesday 19 June 2019 01:40:17 +0100 (0:00:00.305) 0:03:02.807 ******** TASK [container-engine/docker : Configure docker repository on Fedora] ********* Wednesday 19 June 2019 01:40:18 +0100 (0:00:00.407) 0:03:03.215 ******** TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Wednesday 19 June 2019 01:40:18 +0100 (0:00:00.302) 0:03:03.518 ******** TASK [container-engine/docker : Copy yum.conf for editing] ********************* Wednesday 19 June 2019 01:40:18 +0100 (0:00:00.294) 0:03:03.813 ******** TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Wednesday 19 June 
2019 01:40:19 +0100 (0:00:00.276) 0:03:04.089 ******** TASK [container-engine/docker : ensure docker packages are installed] ********** Wednesday 19 June 2019 01:40:19 +0100 (0:00:00.286) 0:03:04.376 ******** TASK [container-engine/docker : Ensure docker packages are installed] ********** Wednesday 19 June 2019 01:40:19 +0100 (0:00:00.373) 0:03:04.749 ******** TASK [container-engine/docker : get available packages on Ubuntu] ************** Wednesday 19 June 2019 01:40:20 +0100 (0:00:00.350) 0:03:05.099 ******** TASK [container-engine/docker : show available packages on ubuntu] ************* Wednesday 19 June 2019 01:40:20 +0100 (0:00:00.287) 0:03:05.387 ******** TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Wednesday 19 June 2019 01:40:20 +0100 (0:00:00.279) 0:03:05.667 ******** TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Wednesday 19 June 2019 01:40:21 +0100 (0:00:00.279) 0:03:05.946 ******** ok: [kube3] ok: [kube2] ok: [kube1] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Wednesday 19 June 2019 01:40:22 +0100 (0:00:01.850) 0:03:07.797 ******** ok: [kube1] ok: [kube3] ok: [kube2] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Wednesday 19 June 2019 01:40:24 +0100 (0:00:01.076) 0:03:08.874 ******** TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Wednesday 19 June 2019 01:40:24 +0100 (0:00:00.282) 0:03:09.156 ******** changed: [kube2] changed: [kube1] changed: [kube3] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Wednesday 19 June 2019 01:40:25 +0100 (0:00:00.976) 0:03:10.133 ******** TASK [container-engine/docker : get systemd version] *************************** Wednesday 19 June 2019 01:40:25 +0100 (0:00:00.302) 0:03:10.435 ******** TASK [container-engine/docker : Write docker.service systemd file] ************* Wednesday 19 June 2019 01:40:25 +0100 (0:00:00.295) 0:03:10.731 ******** TASK [container-engine/docker : Write docker options systemd drop-in] ********** Wednesday 19 June 2019 01:40:26 +0100 (0:00:00.294) 0:03:11.026 ******** changed: [kube2] changed: [kube3] changed: [kube1] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Wednesday 19 June 2019 01:40:28 +0100 (0:00:01.956) 0:03:12.982 ******** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Wednesday 19 June 2019 01:40:30 +0100 (0:00:02.079) 0:03:15.062 ******** TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Wednesday 19 June 2019 01:40:30 +0100 (0:00:00.348) 0:03:15.411 ******** RUNNING HANDLER [container-engine/docker : restart docker] ********************* Wednesday 19 June 2019 01:40:30 +0100 (0:00:00.242) 0:03:15.653 ******** changed: [kube2] changed: [kube3] changed: [kube1] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Wednesday 19 June 2019 01:40:31 +0100 (0:00:01.001) 0:03:16.655 ******** changed: [kube2] changed: [kube3] changed: [kube1] RUNNING HANDLER 
[container-engine/docker : Docker | reload docker.socket] ****** Wednesday 19 June 2019 01:40:33 +0100 (0:00:01.199) 0:03:17.854 ******** RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Wednesday 19 June 2019 01:40:33 +0100 (0:00:00.327) 0:03:18.182 ******** changed: [kube2] changed: [kube1] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Wednesday 19 June 2019 01:40:37 +0100 (0:00:04.159) 0:03:22.342 ******** Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube2] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Wednesday 19 June 2019 01:40:47 +0100 (0:00:10.228) 0:03:32.570 ******** changed: [kube2] changed: [kube3] changed: [kube1] TASK [container-engine/docker : ensure docker service is started and enabled] *** Wednesday 19 June 2019 01:40:48 +0100 (0:00:01.258) 0:03:33.829 ******** ok: [kube1] => (item=docker) ok: [kube3] => (item=docker) ok: [kube2] => (item=docker) TASK [download : include_tasks] ************************************************ Wednesday 19 June 2019 01:40:50 +0100 (0:00:01.169) 0:03:34.998 ******** included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Wednesday 19 June 2019 01:40:50 +0100 (0:00:00.503) 0:03:35.502 ******** ok: [kube1] ok: [kube2] ok: [kube3] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Wednesday 19 June 2019 01:40:51 +0100 (0:00:01.155) 0:03:36.657 ******** changed: [kube1] changed: [kube2] changed: [kube3] TASK [download : container_download | create local directory for saved/loaded container images] *** Wednesday 19 June 2019 01:40:52 +0100 (0:00:01.007) 0:03:37.664 ******** TASK [download : 
Download items] *********************************************** Wednesday 19 June 2019 01:40:52 +0100 (0:00:00.160) 0:03:37.825 ******** fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} [...identical 'delegate_to' failure repeated for kube2 and kube3 and for each remaining download item...] included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 PLAY RECAP ********************************************************************* kube1 : ok=108 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0 kube2 : ok=97 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0 kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0 Wednesday 19 June 2019 01:40:55 +0100 (0:00:02.699) 0:03:40.525 ******** =============================================================================== Install packages ------------------------------------------------------- 33.40s Wait for host to be available ------------------------------------------ 21.72s gather facts from all instances ---------------------------------------- 16.91s container-engine/docker : Docker | pause while Docker restarts --------- 10.23s Persist loaded modules -------------------------------------------------- 6.09s container-engine/docker : Docker | reload docker ------------------------ 4.16s kubernetes/preinstall : Create kubernetes directories ------------------- 4.15s bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.73s download
: Download items ----------------------------------------------- 2.70s bootstrap-os : Gather nodes hostnames ----------------------------------- 2.68s kubernetes/preinstall : Create cni directories -------------------------- 2.67s Load required kernel modules -------------------------------------------- 2.59s Extend root VG ---------------------------------------------------------- 2.38s Gathering Facts --------------------------------------------------------- 2.19s kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.18s kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.15s container-engine/docker : Write docker dns systemd drop-in -------------- 2.08s bootstrap-os : Create remote_tmp for it is used by another module ------- 2.05s kubernetes/preinstall : Set selinux policy ------------------------------ 2.00s container-engine/docker : Write docker options systemd drop-in ---------- 1.96s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
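The repeated fatal above is newer Ansible releases (2.8+) validating include keywords strictly: a dynamic `include_tasks` is a TaskInclude object and rejects `delegate_to` placed on the include itself. A minimal sketch of the usual repair, not kubespray's actual patch (`some_delegate` is a placeholder variable), forwards the keyword to the included tasks:

```yaml
# Sketch only -- illustrates the error class, not the kubespray fix.
# Rejected: 'delegate_to' directly on a dynamic include:
#   - include_tasks: download_container.yml
#     delegate_to: "{{ some_delegate }}"
# Accepted: apply the keyword to the tasks inside the included file:
- name: container_download | include with delegation on the inner tasks
  include_tasks: download_container.yml
  apply:
    delegate_to: "{{ some_delegate }}"   # placeholder host variable
```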
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Wed Jun 19 01:11:51 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 19 Jun 2019 01:11:51 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #228 In-Reply-To: <1997789458.906.1560820690662.JavaMail.jenkins@jenkins.ci.centos.org> References: <1997789458.906.1560820690662.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1790710059.1065.1560906711638.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 56.42 KB...] changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. 
Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... [701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], 
u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── create └── prepare --> Scenario: 'default' --> Action: 'create' [DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg. PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] 
******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', 
u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
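Both molecule runs die at the same 'lint' step: ansible-lint rules [701] and [703] fire because meta/main.yml still carries the galaxy_init placeholder strings and no platforms list. A hedged sketch of metadata that would satisfy those rules (the concrete values below are illustrative, not the project's real metadata):

```yaml
# roles/firewall_config/meta/main.yml -- illustrative values only
galaxy_info:
  author: Gluster Ansible maintainers           # [703] replaces 'your name'
  description: Configure firewalld for Gluster  # [703] replaces 'your description'
  company: example.org                          # [703] or drop the optional field
  license: GPLv3                                # [703] replaces the placeholder
  min_ansible_version: "2.4"
  platforms:                                    # [701] wants at least one entry
    - name: EL
      versions:
        - 7
  galaxy_tags: []
dependencies: []
```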
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Thu Jun 20 00:14:38 2019 From: ci at centos.org (ci at centos.org) Date: Thu, 20 Jun 2019 00:14:38 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #400 In-Reply-To: <1348150243.1056.1560903367988.JavaMail.jenkins@jenkins.ci.centos.org> References: <1348150243.1056.1560903367988.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <196961161.1253.1560989678082.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ Started by timer [EnvInject] - Loading node environment variables. Building remotely on gluster-ci-slave07 (gluster) in workspace No credentials specified Wiping out workspace first. 
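For reference, the post-build script that both messages above skip (cico-node-done-from-ansible.sh) reads as follows when reflowed from the log; the `cico` function below is a stub added so the loop can be exercised without access to the CentOS CI Duffy service:

```shell
# cico-node-done-from-ansible.sh, reflowed from the log.
cico() { echo "released node for ssid $4"; }   # stub, NOT the real cico CLI

WORKSPACE=${WORKSPACE:-$(mktemp -d)}
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
printf 'abc123\n' > "$SSID_FILE"               # sample session id for the demo

# Release every node session listed in the SSID file, as the job does:
for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
```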
Cloning the remote Git repository Cloning repository https://github.com/gluster/centosci.git > git init # timeout=10 Fetching upstream changes from https://github.com/gluster/centosci.git > git --version # timeout=10 > git fetch --tags --progress https://github.com/gluster/centosci.git +refs/heads/*:refs/remotes/origin/* > git config remote.origin.url https://github.com/gluster/centosci.git # timeout=10 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10 > git config remote.origin.url https://github.com/gluster/centosci.git # timeout=10 Fetching upstream changes from https://github.com/gluster/centosci.git > git fetch --tags --progress https://github.com/gluster/centosci.git +refs/heads/*:refs/remotes/origin/* > git rev-parse refs/remotes/origin/master^{commit} # timeout=10 > git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10 Checking out Revision 9591b1182a36ff70283286f45b814ac27b0db18b (refs/remotes/origin/master) > git config core.sparsecheckout # timeout=10 > git checkout -f 9591b1182a36ff70283286f45b814ac27b0db18b Commit message: "Fix the typo" > git rev-list --no-walk 9591b1182a36ff70283286f45b814ac27b0db18b # timeout=10 [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins6098468443527107902.sh + set +x [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins7863372309604745384.sh + set +x Pseudo-terminal will not be allocated because stdin is not a terminal. Warning: Permanently added '172.19.3.108' (ECDSA) to the list of known hosts. Pseudo-terminal will not be allocated because stdin is not a terminal. Warning: Permanently added '172.19.3.108' (ECDSA) to the list of known hosts. [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins1764867521404725078.sh + jobs/scripts/common/bootstrap.sh ++ basename + EXEC_BIN=build.sh ++ cat + scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no root at 172.19.3.108:build.sh Warning: Permanently added '172.19.3.108' (ECDSA) to the list of known hosts. 
+ '[' -z ']' ++ cat + ssh -t -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no root at 172.19.3.108 ./build.sh Pseudo-terminal will not be allocated because stdin is not a terminal. Warning: Permanently added '172.19.3.108' (ECDSA) to the list of known hosts. Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * base: mirror3.ci.centos.org * centos-gluster6: mirror3.ci.centos.org * centos-sclo-rh: mirror3.ci.centos.org * centos-sclo-sclo: mirror3.ci.centos.org * epel: mirror.ci.centos.org * extras: mirror3.ci.centos.org * updates: mirror3.ci.centos.org Package git-1.8.3.1-20.el7.x86_64 already installed and latest version Package createrepo_c-0.10.0-18.el7.x86_64 already installed and latest version Nothing to do Cloning into '/root/glusterd2'... ~/glusterd2 ~ Already on 'master' ~ Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * base: mirror3.ci.centos.org * centos-gluster6: mirror3.ci.centos.org * centos-sclo-rh: mirror3.ci.centos.org * centos-sclo-sclo: mirror3.ci.centos.org * epel: mirror.ci.centos.org * extras: mirror3.ci.centos.org * updates: mirror3.ci.centos.org Package epel-release-7-11.noarch already installed and latest version Nothing to do Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * base: mirror3.ci.centos.org * centos-gluster6: mirror3.ci.centos.org * centos-sclo-rh: mirror3.ci.centos.org * centos-sclo-sclo: mirror3.ci.centos.org * epel: mirror.ci.centos.org * extras: mirror3.ci.centos.org * updates: mirror3.ci.centos.org Package 1:make-3.82-23.el7.x86_64 already installed and latest version Package mock-1.4.16-1.el7.noarch already installed and latest version Package rpm-build-4.11.3-35.el7.x86_64 already installed and latest version Package golang-1.11.5-1.el7.x86_64 already installed and latest version Nothing to do LINUX Installing dep. 
Version: v0.5.0 [curl progress meter omitted] Installing gometalinter. Version: 2.0.5 [curl progress meter omitted] Installing etcd. Version: v3.3.9 [curl progress meter omitted] ~/nightlyrpm6C6nXL/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpm6C6nXL/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpm6C6nXL/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpm6C6nXL ~ INFO: mock.py version 1.4.16 starting (python version = 3.6.8)... Start: init plugins INFO: tmpfs initialized INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpm6C6nXL/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: mounting tmpfs at /var/lib/mock/epel-7-x86_64/root. 
INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.16 INFO: Mock Version: 1.4.16 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpm6C6nXL/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 12 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot INFO: unmounting tmpfs. Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 5ffbadec95e4464dac50a8eb93fd3d6e -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.4wky7tiu:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins3231328049392371145.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done 454294ba +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 108 | n44.pufty | 172.19.3.108 | pufty | 3639 | Deployed | 454294ba | None | None | 7 | x86_64 | 1 | 2430 | None | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Thu Jun 20 00:40:47 2019 From: ci at centos.org (ci at centos.org) Date: Thu, 20 Jun 2019 00:40:47 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #204 In-Reply-To: <1440591987.1059.1560904856072.JavaMail.jenkins@jenkins.ci.centos.org> References: <1440591987.1059.1560904856072.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1310521878.1257.1560991247294.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 266.99 KB...] 
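For reference, the cico-node-done-from-ansible.sh post-build task that appears throughout these logs can be sketched as a self-contained function. The CICO override is my addition so the loop can be exercised without the real cico client; everything else mirrors the script shown in the log.

```shell
# Sketch of cico-node-done-from-ansible.sh as it appears in the build log.
# CICO is an added override (not in the original) so the loop can be tested
# without the real `cico` CLI; the original always calls `cico` directly.
release_nodes() {
    ssid_file=${SSID_FILE:-$WORKSPACE/cico-ssid}
    cico_cmd=${CICO:-cico}
    # One Duffy session ID per line; hand each node back to the pool.
    for ssid in $(cat "${ssid_file}"); do
        "$cico_cmd" -q node done "$ssid"
    done
}
```

Run against the cico-ssid file Jenkins writes at provisioning time, this releases each recorded session (e.g. 454294ba in the table above) with `cico -q node done <ssid>`.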
TASK [container-engine/docker : check number of search domains] **************** Thursday 20 June 2019 01:40:04 +0100 (0:00:00.313) 0:03:01.516 ********* TASK [container-engine/docker : check length of search domains] **************** Thursday 20 June 2019 01:40:04 +0100 (0:00:00.298) 0:03:01.814 ********* TASK [container-engine/docker : check for minimum kernel version] ************** Thursday 20 June 2019 01:40:05 +0100 (0:00:00.353) 0:03:02.168 ********* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Thursday 20 June 2019 01:40:05 +0100 (0:00:00.283) 0:03:02.451 ********* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] *** Thursday 20 June 2019 01:40:06 +0100 (0:00:00.652) 0:03:03.103 ********* TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Thursday 20 June 2019 01:40:07 +0100 (0:00:01.412) 0:03:04.516 ********* TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Thursday 20 June 2019 01:40:07 +0100 (0:00:00.273) 0:03:04.790 ********* TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Thursday 20 June 2019 01:40:08 +0100 (0:00:00.254) 0:03:05.044 ********* TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Thursday 20 June 2019 01:40:08 +0100 (0:00:00.310) 0:03:05.355 ********* TASK [container-engine/docker : Configure docker repository on Fedora] ********* Thursday 20 June 2019 01:40:08 +0100 (0:00:00.302) 0:03:05.657 ********* TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Thursday 20 June 2019 01:40:08 +0100 (0:00:00.288) 0:03:05.946 ********* TASK [container-engine/docker : Copy yum.conf for editing] ********************* Thursday 20 June 2019 01:40:09 +0100 (0:00:00.287) 0:03:06.233 ********* TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Thursday 20 June 2019 
01:40:09 +0100 (0:00:00.295) 0:03:06.529 ********* TASK [container-engine/docker : ensure docker packages are installed] ********** Thursday 20 June 2019 01:40:09 +0100 (0:00:00.289) 0:03:06.818 ********* TASK [container-engine/docker : Ensure docker packages are installed] ********** Thursday 20 June 2019 01:40:10 +0100 (0:00:00.351) 0:03:07.170 ********* TASK [container-engine/docker : get available packages on Ubuntu] ************** Thursday 20 June 2019 01:40:10 +0100 (0:00:00.337) 0:03:07.507 ********* TASK [container-engine/docker : show available packages on ubuntu] ************* Thursday 20 June 2019 01:40:10 +0100 (0:00:00.280) 0:03:07.788 ********* TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Thursday 20 June 2019 01:40:11 +0100 (0:00:00.282) 0:03:08.070 ********* TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Thursday 20 June 2019 01:40:11 +0100 (0:00:00.282) 0:03:08.353 ********* ok: [kube3] ok: [kube1] ok: [kube2] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Thursday 20 June 2019 01:40:13 +0100 (0:00:02.003) 0:03:10.357 ********* ok: [kube1] ok: [kube2] ok: [kube3] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Thursday 20 June 2019 01:40:14 +0100 (0:00:01.215) 0:03:11.572 ********* TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Thursday 20 June 2019 01:40:14 +0100 (0:00:00.307) 0:03:11.880 ********* changed: [kube1] changed: [kube3] changed: [kube2] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Thursday 20 June 2019 01:40:16 +0100 (0:00:01.117) 0:03:12.997 ********* TASK [container-engine/docker : get systemd version] *************************** Thursday 20 June 2019 01:40:16 +0100 (0:00:00.324) 0:03:13.321 ********* TASK [container-engine/docker : Write docker.service systemd file] ************* Thursday 20 June 2019 01:40:16 +0100 (0:00:00.305) 0:03:13.626 ********* TASK [container-engine/docker : Write docker options systemd drop-in] ********** Thursday 20 June 2019 01:40:16 +0100 (0:00:00.300) 0:03:13.927 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Thursday 20 June 2019 01:40:19 +0100 (0:00:02.081) 0:03:16.009 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Thursday 20 June 2019 01:40:21 +0100 (0:00:02.126) 0:03:18.135 ********* TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Thursday 20 June 2019 01:40:21 +0100 (0:00:00.369) 0:03:18.504 ********* RUNNING HANDLER [container-engine/docker : restart docker] ********************* Thursday 20 June 2019 01:40:21 +0100 (0:00:00.237) 0:03:18.742 ********* changed: [kube2] changed: [kube1] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Thursday 20 June 2019 01:40:22 +0100 (0:00:00.941) 0:03:19.684 ********* changed: [kube2] changed: [kube3] changed: [kube1] RUNNING HANDLER 
[container-engine/docker : Docker | reload docker.socket] ****** Thursday 20 June 2019 01:40:23 +0100 (0:00:01.080) 0:03:20.764 ********* RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Thursday 20 June 2019 01:40:24 +0100 (0:00:00.278) 0:03:21.042 ********* changed: [kube2] changed: [kube1] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Thursday 20 June 2019 01:40:28 +0100 (0:00:04.381) 0:03:25.424 ********* Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube2] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Thursday 20 June 2019 01:40:38 +0100 (0:00:10.241) 0:03:35.665 ********* changed: [kube2] changed: [kube3] changed: [kube1] TASK [container-engine/docker : ensure docker service is started and enabled] *** Thursday 20 June 2019 01:40:39 +0100 (0:00:01.249) 0:03:36.914 ********* ok: [kube1] => (item=docker) ok: [kube2] => (item=docker) ok: [kube3] => (item=docker) TASK [download : include_tasks] ************************************************ Thursday 20 June 2019 01:40:41 +0100 (0:00:01.304) 0:03:38.219 ********* included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Thursday 20 June 2019 01:40:41 +0100 (0:00:00.521) 0:03:38.740 ********* ok: [kube1] ok: [kube2] ok: [kube3] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Thursday 20 June 2019 01:40:43 +0100 (0:00:01.308) 0:03:40.048 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [download : container_download | create local directory for saved/loaded container images] *** Thursday 20 June 2019 01:40:44 +0100 (0:00:01.011) 0:03:41.060 ********* TASK [download : 
Download items] *********************************************** Thursday 20 June 2019 01:40:44 +0100 (0:00:00.130) 0:03:41.191 ********* fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} PLAY RECAP ********************************************************************* kube1 : ok=108 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0 kube2 : ok=97 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0 kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0 Thursday 20 June 2019 01:40:46 +0100 (0:00:02.694) 0:03:43.885 ********* =============================================================================== Install packages ------------------------------------------------------- 33.90s Wait for host to be available ------------------------------------------ 24.00s gather facts from all instances ---------------------------------------- 16.90s container-engine/docker : Docker | pause while Docker restarts --------- 10.24s Persist loaded modules -------------------------------------------------- 5.99s container-engine/docker : Docker | reload docker ------------------------ 4.38s kubernetes/preinstall : Create kubernetes directories ------------------- 3.91s download : Download items ----------------------------------------------- 2.69s bootstrap-os : Gather nodes 
hostnames ----------------------------------- 2.69s Load required kernel modules -------------------------------------------- 2.64s kubernetes/preinstall : Create cni directories -------------------------- 2.61s bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.57s Extend root VG ---------------------------------------------------------- 2.42s kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.24s kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.16s container-engine/docker : Write docker dns systemd drop-in -------------- 2.13s container-engine/docker : Write docker options systemd drop-in ---------- 2.08s download : Sync container ----------------------------------------------- 2.03s download : Download items ----------------------------------------------- 2.01s container-engine/docker : ensure service is started if docker packages are already present --- 2.00s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
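The recurring fatal error in the gluster_anteater_gcs log above — 'delegate_to' is not a valid attribute for a TaskInclude — indicates that the kubespray download role places delegate_to directly on a dynamic include_tasks, which the Ansible release on the builder rejects. The sketch below illustrates the pattern and the usual fix; it is not the actual Kubespray source, and the included file name and the download_delegate variable are assumptions:

```yaml
# Illustration only (hypothetical file name); not the Kubespray source.
# Rejected: task keywords such as delegate_to are not valid on include_tasks.
#
# - name: container_download | Make download decision if pull is required by tag or sha256
#   include_tasks: set_docker_image_facts.yml
#   delegate_to: "{{ download_delegate }}"

# Accepted (Ansible >= 2.7): pass the keyword through the include's apply
# option, so it is applied to every task inside the included file instead.
- name: container_download | Make download decision if pull is required by tag or sha256
  include_tasks:
    file: set_docker_image_facts.yml
    apply:
      delegate_to: "{{ download_delegate }}"
```

In practice the remedy for a CI job like this is usually to update Kubespray to a release compatible with the installed Ansible version rather than patch the role in place.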
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Thu Jun 20 01:17:46 2019 From: ci at centos.org (ci at centos.org) Date: Thu, 20 Jun 2019 01:17:46 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #229 In-Reply-To: <1790710059.1065.1560906711638.JavaMail.jenkins@jenkins.ci.centos.org> References: <1790710059.1065.1560906711638.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <175369557.1264.1560993466127.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 56.42 KB...] changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. 
Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... [701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], 
u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── create └── prepare --> Scenario: 'default' --> Action: 'create' [DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg. PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] 
******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', 
u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Fri Jun 21 00:16:21 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 21 Jun 2019 00:16:21 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #401 In-Reply-To: <196961161.1253.1560989678082.JavaMail.jenkins@jenkins.ci.centos.org> References: <196961161.1253.1560989678082.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1593419348.1527.1561076181820.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 38.62 KB...] Transaction test succeeded Running transaction Installing : python36-libs-3.6.8-1.el7.x86_64 1/52 Installing : python36-3.6.8-1.el7.x86_64 2/52 Installing : apr-1.4.8-3.el7_4.1.x86_64 3/52 Installing : mpfr-3.1.1-4.el7.x86_64 4/52 Installing : libmpc-1.0.1-3.el7.x86_64 5/52 Installing : apr-util-1.5.2-6.el7.x86_64 6/52 Installing : python36-six-1.11.0-3.el7.noarch 7/52 Installing : cpp-4.8.5-36.el7_6.2.x86_64 8/52 Installing : python36-idna-2.7-2.el7.noarch 9/52 Installing : python36-pysocks-1.6.8-6.el7.noarch 10/52 Installing : python36-urllib3-1.19.1-5.el7.noarch 11/52 Installing : python36-pyroute2-0.4.13-2.el7.noarch 12/52 Installing : python36-setuptools-39.2.0-3.el7.noarch 13/52 Installing : python36-chardet-2.3.0-6.el7.noarch 14/52 Installing : python36-requests-2.12.5-3.el7.noarch 15/52 Installing : python36-distro-1.2.0-3.el7.noarch 16/52 Installing : python36-markupsafe-0.23-3.el7.x86_64 17/52 Installing : python36-jinja2-2.8.1-2.el7.noarch 18/52 Installing : python36-rpm-4.11.3-4.el7.x86_64 19/52 Installing : elfutils-0.172-2.el7.x86_64 20/52 Installing : unzip-6.0-19.el7.x86_64 21/52 Installing : dwz-0.11-3.el7.x86_64 
22/52 Installing : bzip2-1.0.6-13.el7.x86_64 23/52 Installing : usermode-1.111-5.el7.x86_64 24/52 Installing : pakchois-0.4-10.el7.x86_64 25/52 Installing : distribution-gpg-keys-1.31-1.el7.noarch 26/52 Installing : mock-core-configs-30.3-1.el7.noarch 27/52 Installing : patch-2.7.1-10.el7_5.x86_64 28/52 Installing : libmodman-2.0.1-8.el7.x86_64 29/52 Installing : libproxy-0.4.11-11.el7.x86_64 30/52 Installing : gdb-7.6.1-114.el7.x86_64 31/52 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/52 Installing : perl-srpm-macros-1-8.el7.noarch 33/52 Installing : pigz-2.3.4-1.el7.x86_64 34/52 Installing : golang-src-1.11.5-1.el7.noarch 35/52 Installing : kernel-headers-3.10.0-957.21.3.el7.x86_64 36/52 Installing : glibc-headers-2.17-260.el7_6.5.x86_64 37/52 Installing : glibc-devel-2.17-260.el7_6.5.x86_64 38/52 Installing : gcc-4.8.5-36.el7_6.2.x86_64 39/52 Installing : nettle-2.7.1-8.el7.x86_64 40/52 Installing : zip-3.0-11.el7.x86_64 41/52 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 42/52 Installing : mercurial-2.6.2-8.el7_4.x86_64 43/52 Installing : trousers-0.3.14-2.el7.x86_64 44/52 Installing : gnutls-3.3.29-9.el7_6.x86_64 45/52 Installing : neon-0.30.0-3.el7.x86_64 46/52 Installing : subversion-libs-1.7.14-14.el7.x86_64 47/52 Installing : subversion-1.7.14-14.el7.x86_64 48/52 Installing : golang-1.11.5-1.el7.x86_64 49/52 Installing : golang-bin-1.11.5-1.el7.x86_64 50/52 Installing : rpm-build-4.11.3-35.el7.x86_64 51/52 Installing : mock-1.4.16-1.el7.noarch 52/52 Verifying : trousers-0.3.14-2.el7.x86_64 1/52 Verifying : python36-idna-2.7-2.el7.noarch 2/52 Verifying : rpm-build-4.11.3-35.el7.x86_64 3/52 Verifying : python36-pysocks-1.6.8-6.el7.noarch 4/52 Verifying : mercurial-2.6.2-8.el7_4.x86_64 5/52 Verifying : zip-3.0-11.el7.x86_64 6/52 Verifying : python36-3.6.8-1.el7.x86_64 7/52 Verifying : subversion-libs-1.7.14-14.el7.x86_64 8/52 Verifying : python36-urllib3-1.19.1-5.el7.noarch 9/52 Verifying : nettle-2.7.1-8.el7.x86_64 10/52 Verifying : 
gcc-4.8.5-36.el7_6.2.x86_64 11/52 Verifying : kernel-headers-3.10.0-957.21.3.el7.x86_64 12/52 Verifying : golang-src-1.11.5-1.el7.noarch 13/52 Verifying : python36-pyroute2-0.4.13-2.el7.noarch 14/52 Verifying : pigz-2.3.4-1.el7.x86_64 15/52 Verifying : perl-srpm-macros-1-8.el7.noarch 16/52 Verifying : golang-1.11.5-1.el7.x86_64 17/52 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 18/52 Verifying : glibc-devel-2.17-260.el7_6.5.x86_64 19/52 Verifying : golang-bin-1.11.5-1.el7.x86_64 20/52 Verifying : gdb-7.6.1-114.el7.x86_64 21/52 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 22/52 Verifying : gnutls-3.3.29-9.el7_6.x86_64 23/52 Verifying : mock-1.4.16-1.el7.noarch 24/52 Verifying : libmodman-2.0.1-8.el7.x86_64 25/52 Verifying : python36-setuptools-39.2.0-3.el7.noarch 26/52 Verifying : mpfr-3.1.1-4.el7.x86_64 27/52 Verifying : python36-six-1.11.0-3.el7.noarch 28/52 Verifying : apr-util-1.5.2-6.el7.x86_64 29/52 Verifying : python36-chardet-2.3.0-6.el7.noarch 30/52 Verifying : patch-2.7.1-10.el7_5.x86_64 31/52 Verifying : distribution-gpg-keys-1.31-1.el7.noarch 32/52 Verifying : pakchois-0.4-10.el7.x86_64 33/52 Verifying : mock-core-configs-30.3-1.el7.noarch 34/52 Verifying : usermode-1.111-5.el7.x86_64 35/52 Verifying : apr-1.4.8-3.el7_4.1.x86_64 36/52 Verifying : libproxy-0.4.11-11.el7.x86_64 37/52 Verifying : neon-0.30.0-3.el7.x86_64 38/52 Verifying : bzip2-1.0.6-13.el7.x86_64 39/52 Verifying : subversion-1.7.14-14.el7.x86_64 40/52 Verifying : python36-distro-1.2.0-3.el7.noarch 41/52 Verifying : glibc-headers-2.17-260.el7_6.5.x86_64 42/52 Verifying : dwz-0.11-3.el7.x86_64 43/52 Verifying : unzip-6.0-19.el7.x86_64 44/52 Verifying : python36-markupsafe-0.23-3.el7.x86_64 45/52 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 46/52 Verifying : python36-requests-2.12.5-3.el7.noarch 47/52 Verifying : python36-jinja2-2.8.1-2.el7.noarch 48/52 Verifying : python36-libs-3.6.8-1.el7.x86_64 49/52 Verifying : elfutils-0.172-2.el7.x86_64 50/52 Verifying : 
python36-rpm-4.11.3-4.el7.x86_64 51/52 Verifying : libmpc-1.0.1-3.el7.x86_64 52/52 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.16-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.31-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5 glibc-headers.x86_64 0:2.17-260.el7_6.5 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.21.3.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.3-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python36.x86_64 0:3.6.8-1.el7 python36-chardet.noarch 0:2.3.0-6.el7 python36-distro.noarch 0:1.2.0-3.el7 python36-idna.noarch 0:2.7-2.el7 python36-jinja2.noarch 0:2.8.1-2.el7 python36-libs.x86_64 0:3.6.8-1.el7 python36-markupsafe.x86_64 0:0.23-3.el7 python36-pyroute2.noarch 0:0.4.13-2.el7 python36-pysocks.noarch 0:1.6.8-6.el7 python36-requests.noarch 0:2.12.5-3.el7 python36-rpm.x86_64 0:4.11.3-4.el7 python36-setuptools.noarch 0:39.2.0-3.el7 python36-six.noarch 0:1.11.0-3.el7 python36-urllib3.noarch 0:1.19.1-5.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1918 0 --:--:-- --:--:-- --:--:-- 1926 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 8513k 100 8513k 0 0 14.6M 0 --:--:-- --:--:-- --:--:-- 34.4M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2123 0 --:--:-- --:--:-- --:--:-- 2132 15 38.3M 15 5932k 0 0 8018k 0 0:00:04 --:--:-- 0:00:04 8018k100 38.3M 100 38.3M 0 0 35.1M 0 0:00:01 0:00:01 --:--:-- 92.3M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 576 0 --:--:-- --:--:-- --:--:-- 575 0 0 0 620 0 0 1739 0 --:--:-- --:--:-- --:--:-- 1739 100 10.7M 100 10.7M 0 0 15.5M 0 --:--:-- --:--:-- --:--:-- 15.5M ~/nightlyrpmtO7KAU/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmtO7KAU/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmtO7KAU/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmtO7KAU ~ INFO: mock.py version 1.4.16 starting (python version = 3.6.8)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmtO7KAU/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.16 INFO: Mock Version: 1.4.16 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmtO7KAU/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 27 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M dd551155bc7346208f199a51cedfe08f -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.83rtomvn:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins2769195164359998507.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done 027451be +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 74 | n10.pufty | 172.19.3.74 | pufty | 3677 | Deployed | 027451be | None | None | 7 | x86_64 | 1 | 2090 | None | +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Fri Jun 21 00:40:54 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 21 Jun 2019 00:40:54 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #205 In-Reply-To: <1310521878.1257.1560991247294.JavaMail.jenkins@jenkins.ci.centos.org> References: <1310521878.1257.1560991247294.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1211203994.1532.1561077654725.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 287.60 KB...] 
TASK [container-engine/docker : check number of search domains] **************** Friday 21 June 2019 01:40:12 +0100 (0:00:00.288) 0:02:59.071 *********** TASK [container-engine/docker : check length of search domains] **************** Friday 21 June 2019 01:40:12 +0100 (0:00:00.296) 0:02:59.368 *********** TASK [container-engine/docker : check for minimum kernel version] ************** Friday 21 June 2019 01:40:12 +0100 (0:00:00.286) 0:02:59.655 *********** TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Friday 21 June 2019 01:40:12 +0100 (0:00:00.285) 0:02:59.940 *********** TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] *** Friday 21 June 2019 01:40:13 +0100 (0:00:00.607) 0:03:00.547 *********** TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Friday 21 June 2019 01:40:14 +0100 (0:00:01.342) 0:03:01.890 *********** TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Friday 21 June 2019 01:40:15 +0100 (0:00:00.258) 0:03:02.148 *********** TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Friday 21 June 2019 01:40:15 +0100 (0:00:00.278) 0:03:02.427 *********** TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Friday 21 June 2019 01:40:15 +0100 (0:00:00.344) 0:03:02.772 *********** TASK [container-engine/docker : Configure docker repository on Fedora] ********* Friday 21 June 2019 01:40:16 +0100 (0:00:00.321) 0:03:03.094 *********** TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Friday 21 June 2019 01:40:16 +0100 (0:00:00.314) 0:03:03.408 *********** TASK [container-engine/docker : Copy yum.conf for editing] ********************* Friday 21 June 2019 01:40:16 +0100 (0:00:00.275) 0:03:03.683 *********** TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Friday 21 June 2019 
01:40:16 +0100 (0:00:00.279) 0:03:03.963 *********** TASK [container-engine/docker : ensure docker packages are installed] ********** Friday 21 June 2019 01:40:17 +0100 (0:00:00.282) 0:03:04.245 *********** TASK [container-engine/docker : Ensure docker packages are installed] ********** Friday 21 June 2019 01:40:17 +0100 (0:00:00.401) 0:03:04.647 *********** TASK [container-engine/docker : get available packages on Ubuntu] ************** Friday 21 June 2019 01:40:17 +0100 (0:00:00.344) 0:03:04.991 *********** TASK [container-engine/docker : show available packages on ubuntu] ************* Friday 21 June 2019 01:40:18 +0100 (0:00:00.288) 0:03:05.279 *********** TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Friday 21 June 2019 01:40:18 +0100 (0:00:00.275) 0:03:05.555 *********** TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Friday 21 June 2019 01:40:18 +0100 (0:00:00.284) 0:03:05.840 *********** ok: [kube1] ok: [kube2] ok: [kube3] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Friday 21 June 2019 01:40:20 +0100 (0:00:01.879) 0:03:07.719 *********** ok: [kube1] ok: [kube2] ok: [kube3] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Friday 21 June 2019 01:40:21 +0100 (0:00:01.193) 0:03:08.913 *********** TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Friday 21 June 2019 01:40:22 +0100 (0:00:00.293) 0:03:09.206 *********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Friday 21 June 2019 01:40:23 +0100 (0:00:01.125) 0:03:10.332 *********** TASK [container-engine/docker : get systemd version] *************************** Friday 21 June 2019 01:40:23 +0100 (0:00:00.315) 0:03:10.648 *********** TASK [container-engine/docker : Write docker.service systemd file] ************* Friday 21 June 2019 01:40:23 +0100 (0:00:00.346) 0:03:10.994 *********** TASK [container-engine/docker : Write docker options systemd drop-in] ********** Friday 21 June 2019 01:40:24 +0100 (0:00:00.355) 0:03:11.350 *********** changed: [kube1] changed: [kube3] changed: [kube2] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Friday 21 June 2019 01:40:26 +0100 (0:00:02.158) 0:03:13.509 *********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Friday 21 June 2019 01:40:28 +0100 (0:00:02.191) 0:03:15.700 *********** TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Friday 21 June 2019 01:40:28 +0100 (0:00:00.369) 0:03:16.070 *********** RUNNING HANDLER [container-engine/docker : restart docker] ********************* Friday 21 June 2019 01:40:29 +0100 (0:00:00.234) 0:03:16.304 *********** changed: [kube1] changed: [kube3] changed: [kube2] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Friday 21 June 2019 01:40:30 +0100 (0:00:01.031) 0:03:17.336 *********** changed: [kube1] changed: [kube3] changed: [kube2] RUNNING HANDLER 
[container-engine/docker : Docker | reload docker.socket] ****** Friday 21 June 2019 01:40:31 +0100 (0:00:01.218) 0:03:18.554 *********** RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Friday 21 June 2019 01:40:31 +0100 (0:00:00.324) 0:03:18.879 *********** changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Friday 21 June 2019 01:40:35 +0100 (0:00:04.121) 0:03:23.000 *********** Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube1] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Friday 21 June 2019 01:40:46 +0100 (0:00:10.174) 0:03:33.175 *********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : ensure docker service is started and enabled] *** Friday 21 June 2019 01:40:47 +0100 (0:00:01.246) 0:03:34.422 *********** ok: [kube1] => (item=docker) ok: [kube2] => (item=docker) ok: [kube3] => (item=docker) TASK [download : include_tasks] ************************************************ Friday 21 June 2019 01:40:48 +0100 (0:00:01.258) 0:03:35.681 *********** included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Friday 21 June 2019 01:40:49 +0100 (0:00:00.550) 0:03:36.232 *********** ok: [kube1] ok: [kube2] ok: [kube3] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Friday 21 June 2019 01:40:50 +0100 (0:00:01.242) 0:03:37.474 *********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [download : container_download | create local directory for saved/loaded container images] *** Friday 21 June 2019 01:40:51 +0100 (0:00:01.045) 0:03:38.520 *********** TASK [download : 
Download items] *********************************************** Friday 21 June 2019 01:40:51 +0100 (0:00:00.133) 0:03:38.653 *********** fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
[...the identical 'delegate_to' fatal error repeats for kube2 and kube3, and again for each subsequent download task (failed=10 per host, 30 duplicate messages), interleaved with three "included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3" lines; duplicates truncated...]
PLAY RECAP ********************************************************************* kube1 : ok=109 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0 kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0 kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0 Friday 21 June 2019 01:40:54 +0100 (0:00:02.746) 0:03:41.400 *********** =============================================================================== Install packages ------------------------------------------------------- 34.07s Wait for host to be available ------------------------------------------ 21.46s gather facts from all instances ---------------------------------------- 16.58s container-engine/docker : Docker | pause while Docker restarts --------- 10.18s Persist loaded modules -------------------------------------------------- 6.14s container-engine/docker : Docker | reload docker ------------------------ 4.12s kubernetes/preinstall : Create kubernetes directories ------------------- 3.83s Load required kernel modules -------------------------------------------- 2.75s download : Download items 
----------------------------------------------- 2.75s bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.70s bootstrap-os : Gather nodes hostnames ----------------------------------- 2.57s kubernetes/preinstall : Create cni directories -------------------------- 2.47s Extend root VG ---------------------------------------------------------- 2.35s Gathering Facts --------------------------------------------------------- 2.34s kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.31s container-engine/docker : Write docker dns systemd drop-in -------------- 2.19s container-engine/docker : Write docker options systemd drop-in ---------- 2.16s bootstrap-os : Create remote_tmp for it is used by another module ------- 2.08s kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.04s download : Download items ----------------------------------------------- 2.03s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... Could not match :Build started : False Logical operation result is FALSE Skipping script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0 From ci at centos.org Fri Jun 21 00:53:33 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 21 Jun 2019 00:53:33 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10422 - Failure! 
(release-4.1 on CentOS-7/x86_64) Message-ID: <1653072105.1536.1561078414321.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10422 - Failure: Check console output at https://ci.centos.org/job/gluster_build-rpms/10422/ to view the results. From ci at centos.org Fri Jun 21 01:01:16 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 21 Jun 2019 01:01:16 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10425 - Failure! (release-6 on CentOS-7/x86_64) Message-ID: <477228647.1540.1561078876317.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10425 - Failure: Check console output at https://ci.centos.org/job/gluster_build-rpms/10425/ to view the results. From ci at centos.org Fri Jun 21 01:16:03 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 21 Jun 2019 01:16:03 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #230 In-Reply-To: <175369557.1264.1560993466127.JavaMail.jenkins@jenkins.ci.centos.org> References: <175369557.1264.1560993466127.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <942566762.1543.1561079763854.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 56.41 KB...] changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', 
u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── create └── 
prepare --> Scenario: 'default' --> Action: 'create' [DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg. PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', 
u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0 From ci at centos.org Sat Jun 22 00:16:05 2019 From: ci at centos.org (ci at centos.org) Date: Sat, 22 Jun 2019 00:16:05 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #402 In-Reply-To: <1593419348.1527.1561076181820.JavaMail.jenkins@jenkins.ci.centos.org> References: <1593419348.1527.1561076181820.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <2088618550.1760.1561162565469.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 38.60 KB...] Transaction test succeeded Running transaction Installing : python36-libs-3.6.8-1.el7.x86_64 1/52 Installing : python36-3.6.8-1.el7.x86_64 2/52 Installing : apr-1.4.8-3.el7_4.1.x86_64 3/52 Installing : mpfr-3.1.1-4.el7.x86_64 4/52 Installing : libmpc-1.0.1-3.el7.x86_64 5/52 Installing : apr-util-1.5.2-6.el7.x86_64 6/52 Installing : python36-six-1.11.0-3.el7.noarch 7/52 Installing : cpp-4.8.5-36.el7_6.2.x86_64 8/52 Installing : python36-idna-2.7-2.el7.noarch 9/52 Installing : python36-pysocks-1.6.8-6.el7.noarch 10/52 Installing : python36-urllib3-1.19.1-5.el7.noarch 11/52 Installing : python36-pyroute2-0.4.13-2.el7.noarch 12/52 Installing : python36-setuptools-39.2.0-3.el7.noarch 13/52 Installing : python36-chardet-2.3.0-6.el7.noarch 14/52 Installing : python36-requests-2.12.5-3.el7.noarch 15/52 Installing : python36-distro-1.2.0-3.el7.noarch 16/52 Installing : python36-markupsafe-0.23-3.el7.x86_64 17/52 Installing : python36-jinja2-2.8.1-2.el7.noarch 18/52 Installing : python36-rpm-4.11.3-4.el7.x86_64 19/52 Installing : elfutils-0.172-2.el7.x86_64 20/52 Installing : unzip-6.0-19.el7.x86_64 21/52 Installing : 
dwz-0.11-3.el7.x86_64 22/52 Installing : bzip2-1.0.6-13.el7.x86_64 23/52 Installing : usermode-1.111-5.el7.x86_64 24/52 Installing : pakchois-0.4-10.el7.x86_64 25/52 Installing : distribution-gpg-keys-1.31-1.el7.noarch 26/52 Installing : mock-core-configs-30.3-1.el7.noarch 27/52 Installing : patch-2.7.1-10.el7_5.x86_64 28/52 Installing : libmodman-2.0.1-8.el7.x86_64 29/52 Installing : libproxy-0.4.11-11.el7.x86_64 30/52 Installing : gdb-7.6.1-114.el7.x86_64 31/52 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/52 Installing : perl-srpm-macros-1-8.el7.noarch 33/52 Installing : pigz-2.3.4-1.el7.x86_64 34/52 Installing : golang-src-1.11.5-1.el7.noarch 35/52 Installing : kernel-headers-3.10.0-957.21.3.el7.x86_64 36/52 Installing : glibc-headers-2.17-260.el7_6.5.x86_64 37/52 Installing : glibc-devel-2.17-260.el7_6.5.x86_64 38/52 Installing : gcc-4.8.5-36.el7_6.2.x86_64 39/52 Installing : nettle-2.7.1-8.el7.x86_64 40/52 Installing : zip-3.0-11.el7.x86_64 41/52 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 42/52 Installing : mercurial-2.6.2-8.el7_4.x86_64 43/52 Installing : trousers-0.3.14-2.el7.x86_64 44/52 Installing : gnutls-3.3.29-9.el7_6.x86_64 45/52 Installing : neon-0.30.0-3.el7.x86_64 46/52 Installing : subversion-libs-1.7.14-14.el7.x86_64 47/52 Installing : subversion-1.7.14-14.el7.x86_64 48/52 Installing : golang-1.11.5-1.el7.x86_64 49/52 Installing : golang-bin-1.11.5-1.el7.x86_64 50/52 Installing : rpm-build-4.11.3-35.el7.x86_64 51/52 Installing : mock-1.4.16-1.el7.noarch 52/52 Verifying : trousers-0.3.14-2.el7.x86_64 1/52 Verifying : python36-idna-2.7-2.el7.noarch 2/52 Verifying : rpm-build-4.11.3-35.el7.x86_64 3/52 Verifying : python36-pysocks-1.6.8-6.el7.noarch 4/52 Verifying : mercurial-2.6.2-8.el7_4.x86_64 5/52 Verifying : zip-3.0-11.el7.x86_64 6/52 Verifying : python36-3.6.8-1.el7.x86_64 7/52 Verifying : subversion-libs-1.7.14-14.el7.x86_64 8/52 Verifying : python36-urllib3-1.19.1-5.el7.noarch 9/52 Verifying : nettle-2.7.1-8.el7.x86_64 
10/52 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 11/52 Verifying : kernel-headers-3.10.0-957.21.3.el7.x86_64 12/52 Verifying : golang-src-1.11.5-1.el7.noarch 13/52 Verifying : python36-pyroute2-0.4.13-2.el7.noarch 14/52 Verifying : pigz-2.3.4-1.el7.x86_64 15/52 Verifying : perl-srpm-macros-1-8.el7.noarch 16/52 Verifying : golang-1.11.5-1.el7.x86_64 17/52 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 18/52 Verifying : glibc-devel-2.17-260.el7_6.5.x86_64 19/52 Verifying : golang-bin-1.11.5-1.el7.x86_64 20/52 Verifying : gdb-7.6.1-114.el7.x86_64 21/52 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 22/52 Verifying : gnutls-3.3.29-9.el7_6.x86_64 23/52 Verifying : mock-1.4.16-1.el7.noarch 24/52 Verifying : libmodman-2.0.1-8.el7.x86_64 25/52 Verifying : python36-setuptools-39.2.0-3.el7.noarch 26/52 Verifying : mpfr-3.1.1-4.el7.x86_64 27/52 Verifying : python36-six-1.11.0-3.el7.noarch 28/52 Verifying : apr-util-1.5.2-6.el7.x86_64 29/52 Verifying : python36-chardet-2.3.0-6.el7.noarch 30/52 Verifying : patch-2.7.1-10.el7_5.x86_64 31/52 Verifying : distribution-gpg-keys-1.31-1.el7.noarch 32/52 Verifying : pakchois-0.4-10.el7.x86_64 33/52 Verifying : mock-core-configs-30.3-1.el7.noarch 34/52 Verifying : usermode-1.111-5.el7.x86_64 35/52 Verifying : apr-1.4.8-3.el7_4.1.x86_64 36/52 Verifying : libproxy-0.4.11-11.el7.x86_64 37/52 Verifying : neon-0.30.0-3.el7.x86_64 38/52 Verifying : bzip2-1.0.6-13.el7.x86_64 39/52 Verifying : subversion-1.7.14-14.el7.x86_64 40/52 Verifying : python36-distro-1.2.0-3.el7.noarch 41/52 Verifying : glibc-headers-2.17-260.el7_6.5.x86_64 42/52 Verifying : dwz-0.11-3.el7.x86_64 43/52 Verifying : unzip-6.0-19.el7.x86_64 44/52 Verifying : python36-markupsafe-0.23-3.el7.x86_64 45/52 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 46/52 Verifying : python36-requests-2.12.5-3.el7.noarch 47/52 Verifying : python36-jinja2-2.8.1-2.el7.noarch 48/52 Verifying : python36-libs-3.6.8-1.el7.x86_64 49/52 Verifying : elfutils-0.172-2.el7.x86_64 50/52 Verifying : 
python36-rpm-4.11.3-4.el7.x86_64 51/52 Verifying : libmpc-1.0.1-3.el7.x86_64 52/52 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.16-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.31-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5 glibc-headers.x86_64 0:2.17-260.el7_6.5 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.21.3.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.3-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python36.x86_64 0:3.6.8-1.el7 python36-chardet.noarch 0:2.3.0-6.el7 python36-distro.noarch 0:1.2.0-3.el7 python36-idna.noarch 0:2.7-2.el7 python36-jinja2.noarch 0:2.8.1-2.el7 python36-libs.x86_64 0:3.6.8-1.el7 python36-markupsafe.x86_64 0:0.23-3.el7 python36-pyroute2.noarch 0:0.4.13-2.el7 python36-pysocks.noarch 0:1.6.8-6.el7 python36-requests.noarch 0:2.12.5-3.el7 python36-rpm.x86_64 0:4.11.3-4.el7 python36-setuptools.noarch 0:39.2.0-3.el7 python36-six.noarch 0:1.11.0-3.el7 python36-urllib3.noarch 0:1.19.1-5.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0 [...curl progress output omitted...] Installing gometalinter. Version: 2.0.5 [...curl progress output omitted...] Installing etcd. Version: v3.3.9 [...curl progress output omitted...] ~/nightlyrpm5NOlEa/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpm5NOlEa/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpm5NOlEa/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpm5NOlEa ~ INFO: mock.py version 1.4.16 starting (python version = 3.6.8)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpm5NOlEa/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.16 INFO: Mock Version: 1.4.16 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpm5NOlEa/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 25 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 2436ea041dfc481f801002971c579558 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.qmknn8o5:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
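The mock failure above aborts inside the epel-7-x86_64 chroot at the rpmbuild step. A hedged sketch of rerunning that build outside Jenkins (the SRPM path and chroot config are copied from the log; `echo` is left in front of the command so the snippet runs even where mock is not installed):

```shell
#!/bin/sh
# Sketch: reproduce the failed nightly mock build locally.
# SRPM and config are taken verbatim from the log above.
SRPM=/root/nightlyrpm5NOlEa/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
MOCK_CONFIG=epel-7-x86_64

# Build the command; drop the 'echo' to actually run it (requires the
# mock package and membership in the mock group).
CMD="mock -r $MOCK_CONFIG --rebuild $SRPM"
echo "$CMD"
```

Dropping the `echo` performs the actual rebuild; the resulting logs land under mock's result directory rather than the `/srv/glusterd2/nightly/...` path used by this job.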
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins8942344355311077702.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done 04ee641f +---------+----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 258 | n3.gusty | 172.19.2.131 | gusty | 3685 | Deployed | 04ee641f | None | None | 7 | x86_64 | 1 | 2020 | None | +---------+----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Sat Jun 22 00:41:00 2019 From: ci at centos.org (ci at centos.org) Date: Sat, 22 Jun 2019 00:41:00 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #206 In-Reply-To: <1211203994.1532.1561077654725.JavaMail.jenkins@jenkins.ci.centos.org> References: <1211203994.1532.1561077654725.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <234355449.1764.1561164060910.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 287.46 KB...] 
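The post-build step above runs cico-node-done-from-ansible.sh to release the Duffy node (SSID 04ee641f) recorded in the job's SSID file. A minimal runnable sketch of that loop, with the `cico` client invocation echoed rather than executed so it works without CI credentials:

```shell
#!/bin/sh
# Sketch of the node-release loop from cico-node-done-from-ansible.sh.
# The real script runs `cico -q node done $ssid`; here the command is
# echoed so the loop logic can be exercised anywhere.
WORKSPACE=${WORKSPACE:-/tmp}
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

printf '04ee641f\n' > "$SSID_FILE"   # example SSID taken from the log above

for ssid in $(cat "${SSID_FILE}")
do
    echo cico -q node done "$ssid"
done
```

The `${SSID_FILE:-$WORKSPACE/cico-ssid}` expansion falls back to the workspace default whenever the variable is unset or empty, which is why the earlier "Skipping script" branches leave nodes untouched.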
TASK [container-engine/docker : check number of search domains] **************** Saturday 22 June 2019 01:40:18 +0100 (0:00:00.290) 0:03:02.292 ********* TASK [container-engine/docker : check length of search domains] **************** Saturday 22 June 2019 01:40:18 +0100 (0:00:00.299) 0:03:02.591 ********* TASK [container-engine/docker : check for minimum kernel version] ************** Saturday 22 June 2019 01:40:18 +0100 (0:00:00.304) 0:03:02.896 ********* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Saturday 22 June 2019 01:40:18 +0100 (0:00:00.286) 0:03:03.182 ********* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] *** Saturday 22 June 2019 01:40:19 +0100 (0:00:00.563) 0:03:03.746 ********* TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Saturday 22 June 2019 01:40:20 +0100 (0:00:01.363) 0:03:05.109 ********* TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Saturday 22 June 2019 01:40:21 +0100 (0:00:00.254) 0:03:05.364 ********* TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Saturday 22 June 2019 01:40:21 +0100 (0:00:00.255) 0:03:05.620 ********* TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Saturday 22 June 2019 01:40:21 +0100 (0:00:00.311) 0:03:05.931 ********* TASK [container-engine/docker : Configure docker repository on Fedora] ********* Saturday 22 June 2019 01:40:22 +0100 (0:00:00.301) 0:03:06.232 ********* TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Saturday 22 June 2019 01:40:22 +0100 (0:00:00.280) 0:03:06.513 ********* TASK [container-engine/docker : Copy yum.conf for editing] ********************* Saturday 22 June 2019 01:40:22 +0100 (0:00:00.287) 0:03:06.800 ********* TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Saturday 22 June 2019 
01:40:22 +0100 (0:00:00.277) 0:03:07.078 ********* TASK [container-engine/docker : ensure docker packages are installed] ********** Saturday 22 June 2019 01:40:23 +0100 (0:00:00.274) 0:03:07.352 ********* TASK [container-engine/docker : Ensure docker packages are installed] ********** Saturday 22 June 2019 01:40:23 +0100 (0:00:00.362) 0:03:07.715 ********* TASK [container-engine/docker : get available packages on Ubuntu] ************** Saturday 22 June 2019 01:40:23 +0100 (0:00:00.333) 0:03:08.049 ********* TASK [container-engine/docker : show available packages on ubuntu] ************* Saturday 22 June 2019 01:40:24 +0100 (0:00:00.286) 0:03:08.336 ********* TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Saturday 22 June 2019 01:40:24 +0100 (0:00:00.275) 0:03:08.612 ********* TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Saturday 22 June 2019 01:40:24 +0100 (0:00:00.296) 0:03:08.909 ********* ok: [kube1] ok: [kube2] ok: [kube3] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Saturday 22 June 2019 01:40:26 +0100 (0:00:01.936) 0:03:10.845 ********* ok: [kube3] ok: [kube2] ok: [kube1] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Saturday 22 June 2019 01:40:27 +0100 (0:00:01.177) 0:03:12.022 ********* TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Saturday 22 June 2019 01:40:28 +0100 (0:00:00.352) 0:03:12.375 ********* changed: [kube1] changed: [kube3] changed: [kube2] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Saturday 22 June 2019 01:40:29 +0100 (0:00:01.135) 0:03:13.510 ********* TASK [container-engine/docker : get systemd version] *************************** Saturday 22 June 2019 01:40:29 +0100 (0:00:00.331) 0:03:13.841 ********* TASK [container-engine/docker : Write docker.service systemd file] ************* Saturday 22 June 2019 01:40:29 +0100 (0:00:00.305) 0:03:14.147 ********* TASK [container-engine/docker : Write docker options systemd drop-in] ********** Saturday 22 June 2019 01:40:30 +0100 (0:00:00.307) 0:03:14.455 ********* changed: [kube1] changed: [kube3] changed: [kube2] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Saturday 22 June 2019 01:40:32 +0100 (0:00:02.127) 0:03:16.583 ********* changed: [kube1] changed: [kube3] changed: [kube2] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Saturday 22 June 2019 01:40:34 +0100 (0:00:02.064) 0:03:18.647 ********* TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Saturday 22 June 2019 01:40:34 +0100 (0:00:00.334) 0:03:18.981 ********* RUNNING HANDLER [container-engine/docker : restart docker] ********************* Saturday 22 June 2019 01:40:34 +0100 (0:00:00.240) 0:03:19.222 ********* changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Saturday 22 June 2019 01:40:36 +0100 (0:00:01.040) 0:03:20.263 ********* changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER 
[container-engine/docker : Docker | reload docker.socket] ****** Saturday 22 June 2019 01:40:37 +0100 (0:00:01.191) 0:03:21.454 ********* RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Saturday 22 June 2019 01:40:37 +0100 (0:00:00.354) 0:03:21.809 ********* changed: [kube1] changed: [kube3] changed: [kube2] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Saturday 22 June 2019 01:40:41 +0100 (0:00:04.346) 0:03:26.156 ********* Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube1] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Saturday 22 June 2019 01:40:52 +0100 (0:00:10.240) 0:03:36.397 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : ensure docker service is started and enabled] *** Saturday 22 June 2019 01:40:53 +0100 (0:00:01.252) 0:03:37.649 ********* ok: [kube1] => (item=docker) ok: [kube2] => (item=docker) ok: [kube3] => (item=docker) TASK [download : include_tasks] ************************************************ Saturday 22 June 2019 01:40:54 +0100 (0:00:01.389) 0:03:39.038 ********* included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Saturday 22 June 2019 01:40:55 +0100 (0:00:00.513) 0:03:39.552 ********* ok: [kube1] ok: [kube3] ok: [kube2] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Saturday 22 June 2019 01:40:56 +0100 (0:00:01.161) 0:03:40.713 ********* changed: [kube2] changed: [kube1] changed: [kube3] TASK [download : container_download | create local directory for saved/loaded container images] *** Saturday 22 June 2019 01:40:57 +0100 (0:00:01.081) 0:03:41.795 ********* TASK [download : 
Download items] *********************************************** Saturday 22 June 2019 01:40:57 +0100 (0:00:00.123) 0:03:41.918 ********* fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} [...the identical 'delegate_to' failure repeats for kube1, kube2 and kube3 on each remaining download include, with three "included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3" lines interleaved...] fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} PLAY RECAP ********************************************************************* kube1 : ok=109 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0 kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0 kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0 Saturday 22 June 2019 01:41:00 +0100 (0:00:02.804) 0:03:44.723 ********* =============================================================================== Install packages ------------------------------------------------------- 33.87s Wait for host to be available ------------------------------------------ 23.91s gather facts from all instances ---------------------------------------- 17.27s container-engine/docker : Docker | pause while Docker restarts --------- 10.24s Persist loaded modules -------------------------------------------------- 6.03s container-engine/docker : Docker | reload docker ------------------------ 4.35s kubernetes/preinstall : Create kubernetes directories ------------------- 4.05s download : Download items ----------------------------------------------- 2.80s Load required kernel modules 
-------------------------------------------- 2.74s bootstrap-os : Gather nodes hostnames ----------------------------------- 2.66s kubernetes/preinstall : Create cni directories -------------------------- 2.48s bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.46s Extend root VG ---------------------------------------------------------- 2.32s container-engine/docker : Write docker options systemd drop-in ---------- 2.13s kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.11s container-engine/docker : Write docker dns systemd drop-in -------------- 2.06s kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.05s Gathering Facts --------------------------------------------------------- 2.05s download : Sync container ----------------------------------------------- 2.05s download : Download items ----------------------------------------------- 2.02s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Sat Jun 22 01:01:35 2019 From: ci at centos.org (ci at centos.org) Date: Sat, 22 Jun 2019 01:01:35 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10434 - Failure! 
(release-6 on CentOS-6/x86_64) Message-ID: <1378830576.1766.1561165295766.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10434 - Failure: Check console output at https://ci.centos.org/job/gluster_build-rpms/10434/ to view the results. From ci at centos.org Sat Jun 22 01:12:41 2019 From: ci at centos.org (ci at centos.org) Date: Sat, 22 Jun 2019 01:12:41 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #231 In-Reply-To: <942566762.1543.1561079763854.JavaMail.jenkins@jenkins.ci.centos.org> References: <942566762.1543.1561079763854.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1971497623.1770.1561165961873.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 34.82 KB...] changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup
├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... [701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY,
etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── create └── prepare --> Scenario: 'default' --> Action: 'create' [DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg. PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] 
******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', 
u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Sun Jun 23 00:16:11 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 23 Jun 2019 00:16:11 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #403 In-Reply-To: <2088618550.1760.1561162565469.JavaMail.jenkins@jenkins.ci.centos.org> References: <2088618550.1760.1561162565469.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <2045014388.1902.1561248971500.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 38.61 KB...] Transaction test succeeded Running transaction Installing : python36-libs-3.6.8-1.el7.x86_64 1/52 Installing : python36-3.6.8-1.el7.x86_64 2/52 Installing : apr-1.4.8-3.el7_4.1.x86_64 3/52 Installing : mpfr-3.1.1-4.el7.x86_64 4/52 Installing : libmpc-1.0.1-3.el7.x86_64 5/52 Installing : apr-util-1.5.2-6.el7.x86_64 6/52 Installing : python36-six-1.11.0-3.el7.noarch 7/52 Installing : cpp-4.8.5-36.el7_6.2.x86_64 8/52 Installing : python36-idna-2.7-2.el7.noarch 9/52 Installing : python36-pysocks-1.6.8-6.el7.noarch 10/52 Installing : python36-urllib3-1.19.1-5.el7.noarch 11/52 Installing : python36-pyroute2-0.4.13-2.el7.noarch 12/52 Installing : python36-setuptools-39.2.0-3.el7.noarch 13/52 Installing : python36-chardet-2.3.0-6.el7.noarch 14/52 Installing : python36-requests-2.12.5-3.el7.noarch 15/52 Installing : python36-distro-1.2.0-3.el7.noarch 16/52 Installing : python36-markupsafe-0.23-3.el7.x86_64 17/52 Installing : python36-jinja2-2.8.1-2.el7.noarch 18/52 Installing : python36-rpm-4.11.3-4.el7.x86_64 19/52 Installing : elfutils-0.172-2.el7.x86_64 20/52 Installing : unzip-6.0-19.el7.x86_64 21/52 Installing : 
dwz-0.11-3.el7.x86_64 22/52 Installing : bzip2-1.0.6-13.el7.x86_64 23/52 Installing : usermode-1.111-5.el7.x86_64 24/52 Installing : pakchois-0.4-10.el7.x86_64 25/52 Installing : distribution-gpg-keys-1.31-1.el7.noarch 26/52 Installing : mock-core-configs-30.4-1.el7.noarch 27/52 Installing : patch-2.7.1-10.el7_5.x86_64 28/52 Installing : libmodman-2.0.1-8.el7.x86_64 29/52 Installing : libproxy-0.4.11-11.el7.x86_64 30/52 Installing : gdb-7.6.1-114.el7.x86_64 31/52 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/52 Installing : perl-srpm-macros-1-8.el7.noarch 33/52 Installing : pigz-2.3.4-1.el7.x86_64 34/52 Installing : golang-src-1.11.5-1.el7.noarch 35/52 Installing : kernel-headers-3.10.0-957.21.3.el7.x86_64 36/52 Installing : glibc-headers-2.17-260.el7_6.5.x86_64 37/52 Installing : glibc-devel-2.17-260.el7_6.5.x86_64 38/52 Installing : gcc-4.8.5-36.el7_6.2.x86_64 39/52 Installing : nettle-2.7.1-8.el7.x86_64 40/52 Installing : zip-3.0-11.el7.x86_64 41/52 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 42/52 Installing : mercurial-2.6.2-8.el7_4.x86_64 43/52 Installing : trousers-0.3.14-2.el7.x86_64 44/52 Installing : gnutls-3.3.29-9.el7_6.x86_64 45/52 Installing : neon-0.30.0-3.el7.x86_64 46/52 Installing : subversion-libs-1.7.14-14.el7.x86_64 47/52 Installing : subversion-1.7.14-14.el7.x86_64 48/52 Installing : golang-1.11.5-1.el7.x86_64 49/52 Installing : golang-bin-1.11.5-1.el7.x86_64 50/52 Installing : rpm-build-4.11.3-35.el7.x86_64 51/52 Installing : mock-1.4.16-1.el7.noarch 52/52 Verifying : trousers-0.3.14-2.el7.x86_64 1/52 Verifying : python36-idna-2.7-2.el7.noarch 2/52 Verifying : rpm-build-4.11.3-35.el7.x86_64 3/52 Verifying : python36-pysocks-1.6.8-6.el7.noarch 4/52 Verifying : mercurial-2.6.2-8.el7_4.x86_64 5/52 Verifying : zip-3.0-11.el7.x86_64 6/52 Verifying : python36-3.6.8-1.el7.x86_64 7/52 Verifying : subversion-libs-1.7.14-14.el7.x86_64 8/52 Verifying : python36-urllib3-1.19.1-5.el7.noarch 9/52 Verifying : nettle-2.7.1-8.el7.x86_64 
10/52 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 11/52 Verifying : kernel-headers-3.10.0-957.21.3.el7.x86_64 12/52 Verifying : golang-src-1.11.5-1.el7.noarch 13/52 Verifying : python36-pyroute2-0.4.13-2.el7.noarch 14/52 Verifying : pigz-2.3.4-1.el7.x86_64 15/52 Verifying : perl-srpm-macros-1-8.el7.noarch 16/52 Verifying : golang-1.11.5-1.el7.x86_64 17/52 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 18/52 Verifying : glibc-devel-2.17-260.el7_6.5.x86_64 19/52 Verifying : golang-bin-1.11.5-1.el7.x86_64 20/52 Verifying : gdb-7.6.1-114.el7.x86_64 21/52 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 22/52 Verifying : gnutls-3.3.29-9.el7_6.x86_64 23/52 Verifying : mock-1.4.16-1.el7.noarch 24/52 Verifying : libmodman-2.0.1-8.el7.x86_64 25/52 Verifying : python36-setuptools-39.2.0-3.el7.noarch 26/52 Verifying : mpfr-3.1.1-4.el7.x86_64 27/52 Verifying : python36-six-1.11.0-3.el7.noarch 28/52 Verifying : apr-util-1.5.2-6.el7.x86_64 29/52 Verifying : python36-chardet-2.3.0-6.el7.noarch 30/52 Verifying : patch-2.7.1-10.el7_5.x86_64 31/52 Verifying : distribution-gpg-keys-1.31-1.el7.noarch 32/52 Verifying : pakchois-0.4-10.el7.x86_64 33/52 Verifying : usermode-1.111-5.el7.x86_64 34/52 Verifying : apr-1.4.8-3.el7_4.1.x86_64 35/52 Verifying : libproxy-0.4.11-11.el7.x86_64 36/52 Verifying : mock-core-configs-30.4-1.el7.noarch 37/52 Verifying : neon-0.30.0-3.el7.x86_64 38/52 Verifying : bzip2-1.0.6-13.el7.x86_64 39/52 Verifying : subversion-1.7.14-14.el7.x86_64 40/52 Verifying : python36-distro-1.2.0-3.el7.noarch 41/52 Verifying : glibc-headers-2.17-260.el7_6.5.x86_64 42/52 Verifying : dwz-0.11-3.el7.x86_64 43/52 Verifying : unzip-6.0-19.el7.x86_64 44/52 Verifying : python36-markupsafe-0.23-3.el7.x86_64 45/52 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 46/52 Verifying : python36-requests-2.12.5-3.el7.noarch 47/52 Verifying : python36-jinja2-2.8.1-2.el7.noarch 48/52 Verifying : python36-libs-3.6.8-1.el7.x86_64 49/52 Verifying : elfutils-0.172-2.el7.x86_64 50/52 Verifying : 
python36-rpm-4.11.3-4.el7.x86_64 51/52 Verifying : libmpc-1.0.1-3.el7.x86_64 52/52 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.16-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.31-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5 glibc-headers.x86_64 0:2.17-260.el7_6.5 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.21.3.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.4-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python36.x86_64 0:3.6.8-1.el7 python36-chardet.noarch 0:2.3.0-6.el7 python36-distro.noarch 0:1.2.0-3.el7 python36-idna.noarch 0:2.7-2.el7 python36-jinja2.noarch 0:2.8.1-2.el7 python36-libs.x86_64 0:3.6.8-1.el7 python36-markupsafe.x86_64 0:0.23-3.el7 python36-pyroute2.noarch 0:0.4.13-2.el7 python36-pysocks.noarch 0:1.6.8-6.el7 python36-requests.noarch 0:2.12.5-3.el7 python36-rpm.x86_64 0:4.11.3-4.el7 python36-setuptools.noarch 0:39.2.0-3.el7 python36-six.noarch 0:1.11.0-3.el7 python36-urllib3.noarch 0:1.19.1-5.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 2111 0 --:--:-- --:--:-- --:--:-- 2130 100 8513k 100 8513k 0 0 11.2M 0 --:--:-- --:--:-- --:--:-- 11.2M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2097 0 --:--:-- --:--:-- --:--:-- 2096 100 38.3M 100 38.3M 0 0 45.3M 0 --:--:-- --:--:-- --:--:-- 45.3M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 584 0 --:--:-- --:--:-- --:--:-- 586 0 0 0 620 0 0 1732 0 --:--:-- --:--:-- --:--:-- 1732 100 10.7M 100 10.7M 0 0 17.2M 0 --:--:-- --:--:-- --:--:-- 17.2M ~/nightlyrpm1csv8O/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpm1csv8O/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpm1csv8O/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpm1csv8O ~ INFO: mock.py version 1.4.16 starting (python version = 3.6.8)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpm1csv8O/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.16 INFO: Mock Version: 1.4.16 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpm1csv8O/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 25 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M bf5454530b6e42228520e58dbb80661a -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.u2gdailu:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins2830822174939208736.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done 959c933a +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 141 | n14.crusty | 172.19.2.14 | crusty | 3710 | Deployed | 959c933a | None | None | 7 | x86_64 | 1 | 2130 | None | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Sun Jun 23 00:37:52 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 23 Jun 2019 00:37:52 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #207 In-Reply-To: <234355449.1764.1561164060910.JavaMail.jenkins@jenkins.ci.centos.org> References: <234355449.1764.1561164060910.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <384455401.1905.1561250272237.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 287.36 KB...] 
TASK [container-engine/docker : check number of search domains] **************** Sunday 23 June 2019 01:37:26 +0100 (0:00:00.137) 0:01:57.692 *********** TASK [container-engine/docker : check length of search domains] **************** Sunday 23 June 2019 01:37:26 +0100 (0:00:00.126) 0:01:57.819 *********** TASK [container-engine/docker : check for minimum kernel version] ************** Sunday 23 June 2019 01:37:26 +0100 (0:00:00.127) 0:01:57.946 *********** TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Sunday 23 June 2019 01:37:26 +0100 (0:00:00.123) 0:01:58.070 *********** TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] *** Sunday 23 June 2019 01:37:26 +0100 (0:00:00.246) 0:01:58.316 *********** TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Sunday 23 June 2019 01:37:27 +0100 (0:00:00.632) 0:01:58.949 *********** TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Sunday 23 June 2019 01:37:27 +0100 (0:00:00.117) 0:01:59.066 *********** TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Sunday 23 June 2019 01:37:27 +0100 (0:00:00.122) 0:01:59.189 *********** TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Sunday 23 June 2019 01:37:27 +0100 (0:00:00.139) 0:01:59.328 *********** TASK [container-engine/docker : Configure docker repository on Fedora] ********* Sunday 23 June 2019 01:37:27 +0100 (0:00:00.128) 0:01:59.457 *********** TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Sunday 23 June 2019 01:37:27 +0100 (0:00:00.124) 0:01:59.582 *********** TASK [container-engine/docker : Copy yum.conf for editing] ********************* Sunday 23 June 2019 01:37:28 +0100 (0:00:00.123) 0:01:59.705 *********** TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Sunday 23 June 2019 
01:37:28 +0100 (0:00:00.121) 0:01:59.826 *********** TASK [container-engine/docker : ensure docker packages are installed] ********** Sunday 23 June 2019 01:37:28 +0100 (0:00:00.121) 0:01:59.948 *********** TASK [container-engine/docker : Ensure docker packages are installed] ********** Sunday 23 June 2019 01:37:28 +0100 (0:00:00.156) 0:02:00.105 *********** TASK [container-engine/docker : get available packages on Ubuntu] ************** Sunday 23 June 2019 01:37:28 +0100 (0:00:00.146) 0:02:00.252 *********** TASK [container-engine/docker : show available packages on ubuntu] ************* Sunday 23 June 2019 01:37:28 +0100 (0:00:00.129) 0:02:00.381 *********** TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Sunday 23 June 2019 01:37:28 +0100 (0:00:00.121) 0:02:00.502 *********** TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Sunday 23 June 2019 01:37:29 +0100 (0:00:00.125) 0:02:00.628 *********** ok: [kube3] ok: [kube1] ok: [kube2] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Sunday 23 June 2019 01:37:29 +0100 (0:00:00.881) 0:02:01.509 *********** ok: [kube1] ok: [kube2] ok: [kube3] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Sunday 23 June 2019 01:37:30 +0100 (0:00:00.627) 0:02:02.136 *********** TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Sunday 23 June 2019 01:37:30 +0100 (0:00:00.126) 0:02:02.262 *********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Sunday 23 June 2019 01:37:31 +0100 (0:00:00.447) 0:02:02.710 *********** TASK [container-engine/docker : get systemd version] *************************** Sunday 23 June 2019 01:37:31 +0100 (0:00:00.151) 0:02:02.862 *********** TASK [container-engine/docker : Write docker.service systemd file] ************* Sunday 23 June 2019 01:37:31 +0100 (0:00:00.144) 0:02:03.006 *********** TASK [container-engine/docker : Write docker options systemd drop-in] ********** Sunday 23 June 2019 01:37:31 +0100 (0:00:00.142) 0:02:03.148 *********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Sunday 23 June 2019 01:37:32 +0100 (0:00:00.919) 0:02:04.067 *********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Sunday 23 June 2019 01:37:33 +0100 (0:00:00.906) 0:02:04.973 *********** TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Sunday 23 June 2019 01:37:33 +0100 (0:00:00.150) 0:02:05.124 *********** RUNNING HANDLER [container-engine/docker : restart docker] ********************* Sunday 23 June 2019 01:37:33 +0100 (0:00:00.107) 0:02:05.232 *********** changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Sunday 23 June 2019 01:37:34 +0100 (0:00:00.523) 0:02:05.755 *********** changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER 
[container-engine/docker : Docker | reload docker.socket] ****** Sunday 23 June 2019 01:37:34 +0100 (0:00:00.547) 0:02:06.302 *********** RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Sunday 23 June 2019 01:37:34 +0100 (0:00:00.131) 0:02:06.434 *********** changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Sunday 23 June 2019 01:37:37 +0100 (0:00:03.147) 0:02:09.581 *********** Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube1] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Sunday 23 June 2019 01:37:48 +0100 (0:00:10.103) 0:02:19.684 *********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : ensure docker service is started and enabled] *** Sunday 23 June 2019 01:37:48 +0100 (0:00:00.517) 0:02:20.202 *********** ok: [kube1] => (item=docker) ok: [kube2] => (item=docker) ok: [kube3] => (item=docker) TASK [download : include_tasks] ************************************************ Sunday 23 June 2019 01:37:49 +0100 (0:00:00.714) 0:02:20.916 *********** included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Sunday 23 June 2019 01:37:49 +0100 (0:00:00.208) 0:02:21.125 *********** ok: [kube1] ok: [kube2] ok: [kube3] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Sunday 23 June 2019 01:37:50 +0100 (0:00:00.639) 0:02:21.765 *********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [download : container_download | create local directory for saved/loaded container images] *** Sunday 23 June 2019 01:37:50 +0100 (0:00:00.449) 0:02:22.214 *********** TASK [download : 
Download items] *********************************************** Sunday 23 June 2019 01:37:50 +0100 (0:00:00.059) 0:02:22.274 ***********
fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
[...identical 'delegate_to' failures repeated for each remaining download task on kube1, kube2 and kube3...]
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3

PLAY RECAP *********************************************************************
kube1                      : ok=109  changed=22   unreachable=0    failed=10   skipped=116  rescued=0    ignored=0
kube2                      : ok=96   changed=22   unreachable=0    failed=10   skipped=111  rescued=0    ignored=0
kube3                      : ok=94   changed=22   unreachable=0    failed=10   skipped=113  rescued=0    ignored=0

Sunday 23 June 2019 01:37:51 +0100 (0:00:01.324)       0:02:23.599 ***********
===============================================================================
Install packages ------------------------------------------------------- 25.15s
Extend root VG --------------------------------------------------------- 16.41s
Wait for host to be available ------------------------------------------ 16.36s
container-engine/docker : Docker | pause while Docker restarts --------- 10.10s
gather facts from all instances ----------------------------------------- 9.64s
container-engine/docker : Docker | reload docker ------------------------ 3.15s
Persist loaded modules -------------------------------------------------- 2.64s
kubernetes/preinstall : Create kubernetes directories ------------------- 1.89s
bootstrap-os : Gather nodes hostnames ----------------------------------- 1.51s
bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 1.37s
Load required kernel modules -------------------------------------------- 1.37s
download : Download items ----------------------------------------------- 1.32s
Extend the root LV and FS to occupy remaining space --------------------- 1.28s
Gathering Facts --------------------------------------------------------- 1.20s
download : Download items ----------------------------------------------- 1.19s
bootstrap-os : Disable fastestmirror plugin ----------------------------- 1.16s
kubernetes/preinstall : Create cni directories -------------------------- 1.13s
bootstrap-os : Create remote_tmp for it is used by another module ------- 1.09s
download : Sync container ----------------------------------------------- 1.06s
bootstrap-os : check if atomic host ------------------------------------- 1.00s
==> kube3: An error occurred. The error will be shown after all tasks complete.
An error occurred while executing multiple actions in parallel.
Any errors that occurred are shown below.
An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again:
Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again.
Build step 'Execute shell' marked build as failure
Performing Post build task...
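The repeated failure above is Ansible refusing `delegate_to` as a keyword on a dynamic `include_tasks` (a `TaskInclude`). A minimal sketch of the pattern involved, assuming a delegating include like the one in Kubespray's download role; the task name and the `download_delegate` variable are illustrative, not the exact Kubespray source:

```yaml
# Pattern that newer Ansible rejects: 'delegate_to' set directly on the include.
- name: container_download | include download tasks   # illustrative name
  include_tasks: download_container.yml
  delegate_to: "{{ download_delegate }}"              # illustrative variable

# A form Ansible does accept: forward the keyword to the included tasks
# via 'apply' (available for include_tasks since Ansible 2.7).
- name: container_download | include download tasks
  include_tasks: download_container.yml
  apply:
    delegate_to: "{{ download_delegate }}"
```

In practice the usual remedy is to run a Kubespray checkout that matches the installed Ansible version (or vice versa); the sketch only illustrates why the include itself, not the included file, is what the parser flags.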
Could not match :Build started  : False
Logical operation result is FALSE
Skipping script  :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0
From ci at centos.org  Sun Jun 23 01:22:27 2019
From: ci at centos.org (ci at centos.org)
Date: Sun, 23 Jun 2019 01:22:27 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #232
In-Reply-To: <1971497623.1770.1561165961873.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1971497623.1770.1561165961873.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <2142847659.1917.1561252947849.JavaMail.jenkins@jenkins.ci.centos.org>

See 
------------------------------------------
[...truncated 56.41 KB...]
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) creation to complete] *******************************
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=6    changed=4    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0

--> Scenario: 'default'
--> Action: 'prepare'

PLAY [Prepare] *****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

PLAY RECAP *********************************************************************
instance                   : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: author
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[703] Should change default metadata: description
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[703] Should change default metadata: company
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[703] Should change default metadata: license
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
An error occurred during the test sequence action: 'lint'. Cleaning up.
--> Scenario: 'default'
--> Action: 'destroy'

PLAY [Destroy] *****************************************************************

TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Delete docker network(s)] ************************************************

PLAY RECAP *********************************************************************
localhost                  : ok=2    changed=2    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0

./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── create
    └── prepare

--> Scenario: 'default'
--> Action: 'create'
[DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.

PLAY [Create] ******************************************************************

TASK [Log into a Docker registry] **********************************************
skipping: [localhost] => (item=None)

TASK [Create Dockerfiles from image names] *************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Discover local Docker images] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Build an Ansible compatible image] ***************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Create docker network(s)] ************************************************

TASK [Determine the CMD directives] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Create molecule instance(s)] *********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) creation to complete] *******************************
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=6    changed=3    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0

--> Scenario: 'default'
--> Action: 'prepare'

PLAY [Prepare] *****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

PLAY RECAP *********************************************************************
instance                   : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: author
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[703] Should change default metadata: description
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[703] Should change default metadata: company
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[703] Should change default metadata: license
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
An error occurred during the test sequence action: 'lint'. Cleaning up.
--> Scenario: 'default'
--> Action: 'destroy'

PLAY [Destroy] *****************************************************************

TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Delete docker network(s)] ************************************************

PLAY RECAP *********************************************************************
localhost                  : ok=2    changed=2    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0

Build step 'Execute shell' marked build as failure
Performing Post build task...
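The ansible-lint findings above ([701] and [703]) all point at untouched Galaxy boilerplate in the role's `meta/main.yml`. A sketch of metadata that would satisfy those rules is below; every concrete value (author, description, company, platform) is a placeholder of my choosing, not what the gluster-ansible-infra maintainers actually use:

```yaml
# Illustrative meta/main.yml content only; real values belong to the role owners.
galaxy_info:
  author: Gluster infra team                 # [703] replaces default 'your name'
  description: Configure firewalld for GlusterFS hosts   # [703] replaces 'your description'
  company: Example Org                       # [703] placeholder company
  license: GPLv2                             # [703] a concrete license string
  min_ansible_version: 2.5
  platforms:                                 # [701] role info should contain platforms
    - name: EL
      versions:
        - 7
  galaxy_tags: []
dependencies: []
```

Filling these fields in (or disabling the 701/703 rules in the lint config) would stop the molecule `lint` action from aborting the test sequence.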
Could not match :Build started  : False
Logical operation result is FALSE
Skipping script  :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0
From ci at centos.org  Mon Jun 24 00:16:12 2019
From: ci at centos.org (ci at centos.org)
Date: Mon, 24 Jun 2019 00:16:12 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #404
In-Reply-To: <2045014388.1902.1561248971500.JavaMail.jenkins@jenkins.ci.centos.org>
References: <2045014388.1902.1561248971500.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1266851674.2029.1561335372499.JavaMail.jenkins@jenkins.ci.centos.org>

See 
------------------------------------------
[...truncated 38.61 KB...]
Transaction test succeeded
Running transaction
Installing : python36-libs-3.6.8-1.el7.x86_64 1/52 Installing : python36-3.6.8-1.el7.x86_64 2/52 Installing : apr-1.4.8-3.el7_4.1.x86_64 3/52 Installing : mpfr-3.1.1-4.el7.x86_64 4/52 Installing : libmpc-1.0.1-3.el7.x86_64 5/52 Installing : apr-util-1.5.2-6.el7.x86_64 6/52 Installing : python36-six-1.11.0-3.el7.noarch 7/52 Installing : cpp-4.8.5-36.el7_6.2.x86_64 8/52 Installing : python36-idna-2.7-2.el7.noarch 9/52 Installing : python36-pysocks-1.6.8-6.el7.noarch 10/52 Installing : python36-urllib3-1.19.1-5.el7.noarch 11/52 Installing : python36-pyroute2-0.4.13-2.el7.noarch 12/52 Installing : python36-setuptools-39.2.0-3.el7.noarch 13/52 Installing : python36-chardet-2.3.0-6.el7.noarch 14/52 Installing : python36-requests-2.12.5-3.el7.noarch 15/52 Installing : python36-distro-1.2.0-3.el7.noarch 16/52 Installing : python36-markupsafe-0.23-3.el7.x86_64 17/52 Installing : python36-jinja2-2.8.1-2.el7.noarch 18/52 Installing : python36-rpm-4.11.3-4.el7.x86_64 19/52 Installing : elfutils-0.172-2.el7.x86_64 20/52 Installing : unzip-6.0-19.el7.x86_64 21/52 Installing : 
dwz-0.11-3.el7.x86_64 22/52 Installing : bzip2-1.0.6-13.el7.x86_64 23/52 Installing : usermode-1.111-5.el7.x86_64 24/52 Installing : pakchois-0.4-10.el7.x86_64 25/52 Installing : distribution-gpg-keys-1.31-1.el7.noarch 26/52 Installing : mock-core-configs-30.4-1.el7.noarch 27/52 Installing : patch-2.7.1-10.el7_5.x86_64 28/52 Installing : libmodman-2.0.1-8.el7.x86_64 29/52 Installing : libproxy-0.4.11-11.el7.x86_64 30/52 Installing : gdb-7.6.1-114.el7.x86_64 31/52 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/52 Installing : perl-srpm-macros-1-8.el7.noarch 33/52 Installing : pigz-2.3.4-1.el7.x86_64 34/52 Installing : golang-src-1.11.5-1.el7.noarch 35/52 Installing : kernel-headers-3.10.0-957.21.3.el7.x86_64 36/52 Installing : glibc-headers-2.17-260.el7_6.5.x86_64 37/52 Installing : glibc-devel-2.17-260.el7_6.5.x86_64 38/52 Installing : gcc-4.8.5-36.el7_6.2.x86_64 39/52 Installing : nettle-2.7.1-8.el7.x86_64 40/52 Installing : zip-3.0-11.el7.x86_64 41/52 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 42/52 Installing : mercurial-2.6.2-8.el7_4.x86_64 43/52 Installing : trousers-0.3.14-2.el7.x86_64 44/52 Installing : gnutls-3.3.29-9.el7_6.x86_64 45/52 Installing : neon-0.30.0-3.el7.x86_64 46/52 Installing : subversion-libs-1.7.14-14.el7.x86_64 47/52 Installing : subversion-1.7.14-14.el7.x86_64 48/52 Installing : golang-1.11.5-1.el7.x86_64 49/52 Installing : golang-bin-1.11.5-1.el7.x86_64 50/52 Installing : rpm-build-4.11.3-35.el7.x86_64 51/52 Installing : mock-1.4.16-1.el7.noarch 52/52 Verifying : trousers-0.3.14-2.el7.x86_64 1/52 Verifying : python36-idna-2.7-2.el7.noarch 2/52 Verifying : rpm-build-4.11.3-35.el7.x86_64 3/52 Verifying : python36-pysocks-1.6.8-6.el7.noarch 4/52 Verifying : mercurial-2.6.2-8.el7_4.x86_64 5/52 Verifying : zip-3.0-11.el7.x86_64 6/52 Verifying : python36-3.6.8-1.el7.x86_64 7/52 Verifying : subversion-libs-1.7.14-14.el7.x86_64 8/52 Verifying : python36-urllib3-1.19.1-5.el7.noarch 9/52 Verifying : nettle-2.7.1-8.el7.x86_64 
10/52 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 11/52 Verifying : kernel-headers-3.10.0-957.21.3.el7.x86_64 12/52 Verifying : golang-src-1.11.5-1.el7.noarch 13/52 Verifying : python36-pyroute2-0.4.13-2.el7.noarch 14/52 Verifying : pigz-2.3.4-1.el7.x86_64 15/52 Verifying : perl-srpm-macros-1-8.el7.noarch 16/52 Verifying : golang-1.11.5-1.el7.x86_64 17/52 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 18/52 Verifying : glibc-devel-2.17-260.el7_6.5.x86_64 19/52 Verifying : golang-bin-1.11.5-1.el7.x86_64 20/52 Verifying : gdb-7.6.1-114.el7.x86_64 21/52 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 22/52 Verifying : gnutls-3.3.29-9.el7_6.x86_64 23/52 Verifying : mock-1.4.16-1.el7.noarch 24/52 Verifying : libmodman-2.0.1-8.el7.x86_64 25/52 Verifying : python36-setuptools-39.2.0-3.el7.noarch 26/52 Verifying : mpfr-3.1.1-4.el7.x86_64 27/52 Verifying : python36-six-1.11.0-3.el7.noarch 28/52 Verifying : apr-util-1.5.2-6.el7.x86_64 29/52 Verifying : python36-chardet-2.3.0-6.el7.noarch 30/52 Verifying : patch-2.7.1-10.el7_5.x86_64 31/52 Verifying : distribution-gpg-keys-1.31-1.el7.noarch 32/52 Verifying : pakchois-0.4-10.el7.x86_64 33/52 Verifying : usermode-1.111-5.el7.x86_64 34/52 Verifying : apr-1.4.8-3.el7_4.1.x86_64 35/52 Verifying : libproxy-0.4.11-11.el7.x86_64 36/52 Verifying : mock-core-configs-30.4-1.el7.noarch 37/52 Verifying : neon-0.30.0-3.el7.x86_64 38/52 Verifying : bzip2-1.0.6-13.el7.x86_64 39/52 Verifying : subversion-1.7.14-14.el7.x86_64 40/52 Verifying : python36-distro-1.2.0-3.el7.noarch 41/52 Verifying : glibc-headers-2.17-260.el7_6.5.x86_64 42/52 Verifying : dwz-0.11-3.el7.x86_64 43/52 Verifying : unzip-6.0-19.el7.x86_64 44/52 Verifying : python36-markupsafe-0.23-3.el7.x86_64 45/52 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 46/52 Verifying : python36-requests-2.12.5-3.el7.noarch 47/52 Verifying : python36-jinja2-2.8.1-2.el7.noarch 48/52 Verifying : python36-libs-3.6.8-1.el7.x86_64 49/52 Verifying : elfutils-0.172-2.el7.x86_64 50/52 Verifying : 
python36-rpm-4.11.3-4.el7.x86_64 51/52 Verifying : libmpc-1.0.1-3.el7.x86_64 52/52 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.16-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.31-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5 glibc-headers.x86_64 0:2.17-260.el7_6.5 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.21.3.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.4-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python36.x86_64 0:3.6.8-1.el7 python36-chardet.noarch 0:2.3.0-6.el7 python36-distro.noarch 0:1.2.0-3.el7 python36-idna.noarch 0:2.7-2.el7 python36-jinja2.noarch 0:2.8.1-2.el7 python36-libs.x86_64 0:3.6.8-1.el7 python36-markupsafe.x86_64 0:0.23-3.el7 python36-pyroute2.noarch 0:0.4.13-2.el7 python36-pysocks.noarch 0:1.6.8-6.el7 python36-requests.noarch 0:2.12.5-3.el7 python36-rpm.x86_64 0:4.11.3-4.el7 python36-setuptools.noarch 0:39.2.0-3.el7 python36-six.noarch 0:1.11.0-3.el7 python36-urllib3.noarch 0:1.19.1-5.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0
[...curl progress meter truncated...]
Installing gometalinter.
Version: 2.0.5
[...curl progress meter truncated...]
Installing etcd.
Version: v3.3.9
[...curl progress meter truncated...]
~/nightlyrpm3Jj6dU/go/src/github.com/gluster/glusterd2 ~
Installing vendored packages
Creating dist archive /root/nightlyrpm3Jj6dU/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz
Created dist archive /root/nightlyrpm3Jj6dU/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz
~ ~/nightlyrpm3Jj6dU ~
INFO: mock.py version 1.4.16 starting (python version = 3.6.8)...
Start: init plugins
INFO: selinux disabled
Finish: init plugins
Start: run
INFO: Start(/root/nightlyrpm3Jj6dU/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64)
Start: clean chroot
Finish: clean chroot
Start: chroot init
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled yum cache
Start: cleaning yum metadata
Finish: cleaning yum metadata
INFO: enabled HW Info plugin
Mock Version: 1.4.16
INFO: Mock Version: 1.4.16
Start: yum install
Finish: yum install
Start: creating root cache
Finish: creating root cache
Finish: chroot init
INFO: Installed packages:
Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: Outputting list of installed packages
Finish: Outputting list of installed packages
ERROR: Exception(/root/nightlyrpm3Jj6dU/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 25 seconds
INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64
INFO: Cleaning up build root ('cleanup_on_failure=True')
Start: clean chroot
Finish: clean chroot
ERROR: Command failed:
 # /usr/bin/systemd-nspawn -q -M ab193f5d6c4f40c1a18fd3318e7ba495 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.5lki3y5o:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec
Build step 'Execute shell' marked build as failure
Performing Post build task...
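The mock failure above can be reproduced outside Jenkins by feeding the same source RPM to the same chroot config. A minimal dry-run sketch, assuming the SRPM file name and the epel-7-x86_64 config from the log above; it only prints the command, since mock itself has to run on a builder host:

```shell
#!/bin/sh
# Dry-run sketch: rebuild the failing SRPM in the same mock chroot.
# SRPM name and chroot config are copied from the log above; the command
# is only printed here -- run it for real on a host with mock installed.
SRPM=glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
CONFIG=epel-7-x86_64
echo mock -r "$CONFIG" --rebuild "$SRPM"
```

`mock -r <config> --rebuild <srpm>` is the standard invocation behind this job; the nspawn command in the error is what mock runs inside that chroot.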
Match found for :Building remotely : True
Logical operation result is TRUE
Running script  :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by

SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins8043679581486374080.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done 70490574
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname  | ip_address   | chassis | used_count | current_state | comment  | distro | rel  | centos_version | architecture | node_pool | console_port | flavor |
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 270     | n15.gusty | 172.19.2.143 | gusty   | 3685       | Deployed      | 70490574 | None   | None | 7              | x86_64       | 1         | 2140         | None   |
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0

From ci at centos.org  Mon Jun 24 00:40:53 2019
From: ci at centos.org (ci at centos.org)
Date: Mon, 24 Jun 2019 00:40:53 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #208
In-Reply-To: <384455401.1905.1561250272237.JavaMail.jenkins@jenkins.ci.centos.org>
References: <384455401.1905.1561250272237.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1564399888.2033.1561336853111.JavaMail.jenkins@jenkins.ci.centos.org>

See 
------------------------------------------
[...truncated 287.51 KB...]
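The cico-node-done-from-ansible.sh post-build task shown above loops over Duffy session IDs that Jenkins recorded at node-checkout time and releases each node. A runnable sketch of that loop with the real `cico` client stubbed out (the stub function and the sample SSID are illustrative, not part of the CI scripts):

```shell
#!/bin/sh
# Sketch of the cico-node-done-from-ansible.sh loop, with the Duffy
# "cico" CLI replaced by a stub so the loop can be dry-run anywhere.
WORKSPACE=$(mktemp -d)
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

# Jenkins writes one session ID per line when nodes are checked out;
# this sample ID mirrors the one in the log above.
echo 70490574 > "$SSID_FILE"

# Stub standing in for the real "cico -q node done <ssid>" call.
cico() { echo "released node for ssid $4"; }

for ssid in $(cat "${SSID_FILE}")
do
    cico -q node done "$ssid"
done
```

When `$SSID_FILE` is empty or unset (as in the "Skipping script" branches of these emails), the loop simply does nothing, which is why failed checkouts still end with `END OF POST BUILD TASK : 0`.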
TASK [container-engine/docker : check number of search domains] ****************
Monday 24 June 2019 01:40:10 +0100 (0:00:00.300) 0:02:58.672 ***********

TASK [container-engine/docker : check length of search domains] ****************
Monday 24 June 2019 01:40:10 +0100 (0:00:00.312) 0:02:58.985 ***********

TASK [container-engine/docker : check for minimum kernel version] **************
Monday 24 June 2019 01:40:11 +0100 (0:00:00.305) 0:02:59.291 ***********

TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] ***
Monday 24 June 2019 01:40:11 +0100 (0:00:00.282) 0:02:59.573 ***********

TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] ***
Monday 24 June 2019 01:40:12 +0100 (0:00:00.528) 0:03:00.102 ***********

TASK [container-engine/docker : ensure docker-ce repository public key is installed] ***
Monday 24 June 2019 01:40:13 +0100 (0:00:01.340) 0:03:01.443 ***********

TASK [container-engine/docker : ensure docker-ce repository is enabled] ********
Monday 24 June 2019 01:40:13 +0100 (0:00:00.254) 0:03:01.697 ***********

TASK [container-engine/docker : ensure docker-engine repository public key is installed] ***
Monday 24 June 2019 01:40:13 +0100 (0:00:00.318) 0:03:02.015 ***********

TASK [container-engine/docker : ensure docker-engine repository is enabled] ****
Monday 24 June 2019 01:40:14 +0100 (0:00:00.325) 0:03:02.341 ***********

TASK [container-engine/docker : Configure docker repository on Fedora] *********
Monday 24 June 2019 01:40:14 +0100 (0:00:00.301) 0:03:02.643 ***********

TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] ***
Monday 24 June 2019 01:40:14 +0100 (0:00:00.289) 0:03:02.932 ***********

TASK [container-engine/docker : Copy yum.conf for editing] *********************
Monday 24 June 2019 01:40:15 +0100 (0:00:00.286) 0:03:03.219 ***********

TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ******
Monday 24 June 2019 01:40:15 +0100 (0:00:00.288) 0:03:03.508 ***********

TASK [container-engine/docker : ensure docker packages are installed] **********
Monday 24 June 2019 01:40:15 +0100 (0:00:00.288) 0:03:03.796 ***********

TASK [container-engine/docker : Ensure docker packages are installed] **********
Monday 24 June 2019 01:40:16 +0100 (0:00:00.390) 0:03:04.186 ***********

TASK [container-engine/docker : get available packages on Ubuntu] **************
Monday 24 June 2019 01:40:16 +0100 (0:00:00.356) 0:03:04.543 ***********

TASK [container-engine/docker : show available packages on ubuntu] *************
Monday 24 June 2019 01:40:16 +0100 (0:00:00.313) 0:03:04.856 ***********

TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] ***
Monday 24 June 2019 01:40:17 +0100 (0:00:00.279) 0:03:05.136 ***********

TASK [container-engine/docker : ensure service is started if docker packages are already present] ***
Monday 24 June 2019 01:40:17 +0100 (0:00:00.278) 0:03:05.415 ***********
ok: [kube3]
ok: [kube1]
ok: [kube2]
 [WARNING]: flush_handlers task does not support when conditional

TASK [container-engine/docker : set fact for docker_version] *******************
Monday 24 June 2019 01:40:19 +0100 (0:00:01.881) 0:03:07.297 ***********
ok: [kube1]
ok: [kube2]
ok: [kube3]

TASK [container-engine/docker : check minimum docker version for docker_dns mode. You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] ***
Monday 24 June 2019 01:40:20 +0100 (0:00:01.216) 0:03:08.513 ***********

TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] ***
Monday 24 June 2019 01:40:20 +0100 (0:00:00.332) 0:03:08.846 ***********
changed: [kube1]
changed: [kube2]
changed: [kube3]

TASK [container-engine/docker : Write docker proxy drop-in] ********************
Monday 24 June 2019 01:40:21 +0100 (0:00:01.067) 0:03:09.914 ***********

TASK [container-engine/docker : get systemd version] ***************************
Monday 24 June 2019 01:40:22 +0100 (0:00:00.347) 0:03:10.262 ***********

TASK [container-engine/docker : Write docker.service systemd file] *************
Monday 24 June 2019 01:40:22 +0100 (0:00:00.313) 0:03:10.575 ***********

TASK [container-engine/docker : Write docker options systemd drop-in] **********
Monday 24 June 2019 01:40:22 +0100 (0:00:00.304) 0:03:10.880 ***********
changed: [kube2]
changed: [kube1]
changed: [kube3]

TASK [container-engine/docker : Write docker dns systemd drop-in] **************
Monday 24 June 2019 01:40:24 +0100 (0:00:02.123) 0:03:13.003 ***********
changed: [kube1]
changed: [kube3]
changed: [kube2]

TASK [container-engine/docker : Copy docker orphan clean up script to the node] ***
Monday 24 June 2019 01:40:27 +0100 (0:00:02.056) 0:03:15.060 ***********

TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] ***
Monday 24 June 2019 01:40:27 +0100 (0:00:00.350) 0:03:15.410 ***********

RUNNING HANDLER [container-engine/docker : restart docker] *********************
Monday 24 June 2019 01:40:27 +0100 (0:00:00.242) 0:03:15.653 ***********
changed: [kube2]
changed: [kube1]
changed: [kube3]

RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************
Monday 24 June 2019 01:40:28 +0100 (0:00:00.980) 0:03:16.634 ***********
changed: [kube1]
changed: [kube2]
changed: [kube3]

RUNNING HANDLER [container-engine/docker : Docker | reload docker.socket] ******
Monday 24 June 2019 01:40:29 +0100 (0:00:01.184) 0:03:17.819 ***********

RUNNING HANDLER [container-engine/docker : Docker | reload docker] *************
Monday 24 June 2019 01:40:30 +0100 (0:00:00.275) 0:03:18.094 ***********
changed: [kube1]
changed: [kube3]
changed: [kube2]

RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] ***
Monday 24 June 2019 01:40:34 +0100 (0:00:04.186) 0:03:22.281 ***********
Pausing for 10 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
[container-engine/docker : Docker | pause while Docker restarts]
Waiting for docker restart:
ok: [kube2]

RUNNING HANDLER [container-engine/docker : Docker | wait for docker] ***********
Monday 24 June 2019 01:40:44 +0100 (0:00:10.190) 0:03:32.472 ***********
changed: [kube2]
changed: [kube1]
changed: [kube3]

TASK [container-engine/docker : ensure docker service is started and enabled] ***
Monday 24 June 2019 01:40:45 +0100 (0:00:01.268) 0:03:33.741 ***********
ok: [kube1] => (item=docker)
ok: [kube3] => (item=docker)
ok: [kube2] => (item=docker)

TASK [download : include_tasks] ************************************************
Monday 24 June 2019 01:40:47 +0100 (0:00:01.296) 0:03:35.037 ***********
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3

TASK [download : Register docker images info] **********************************
Monday 24 June 2019 01:40:47 +0100 (0:00:00.505) 0:03:35.543 ***********
ok: [kube1]
ok: [kube2]
ok: [kube3]

TASK [download : container_download | Create dest directory for saved/loaded container images] ***
Monday 24 June 2019 01:40:48 +0100 (0:00:01.207) 0:03:36.750 ***********
changed: [kube2]
changed: [kube1]
changed: [kube3]

TASK [download : container_download | create local directory for saved/loaded container images] ***
Monday 24 June 2019 01:40:49 +0100 (0:00:01.071) 0:03:37.822 ***********

TASK [download : 
Download items] ***********************************************
Monday 24 June 2019 01:40:49 +0100 (0:00:00.126) 0:03:37.948 ***********
fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
[...the identical 'delegate_to' failure repeats for kube1, kube2 and kube3 on each remaining download task; duplicates truncated...]
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3

PLAY RECAP *********************************************************************
kube1 : ok=108 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0
kube2 : ok=97 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0
kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0

Monday 24 June 2019 01:40:52 +0100 (0:00:02.778) 0:03:40.727 ***********
===============================================================================
Install packages ------------------------------------------------------- 34.49s
Wait for host to be available ------------------------------------------ 21.34s
gather facts from all instances ---------------------------------------- 16.66s
container-engine/docker : Docker | pause while Docker restarts --------- 10.19s
Persist loaded modules -------------------------------------------------- 6.03s
container-engine/docker : Docker | reload docker ------------------------ 4.19s
kubernetes/preinstall : Create kubernetes directories ------------------- 3.93s
download : Download items ----------------------------------------------- 2.78s
Load required kernel modules -------------------------------------------- 2.65s
bootstrap-os : Gather nodes hostnames ----------------------------------- 2.63s
bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.62s
Extend root VG ---------------------------------------------------------- 2.40s
kubernetes/preinstall : Create cni directories -------------------------- 2.38s
Gathering Facts --------------------------------------------------------- 2.26s
kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.26s
kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.23s
container-engine/docker : Write docker options systemd drop-in ---------- 2.12s
container-engine/docker : Write docker dns systemd drop-in -------------- 2.06s
download : Sync container ----------------------------------------------- 1.91s
download : Download items ----------------------------------------------- 1.90s

==> kube3: An error occurred. The error will be shown after all tasks complete.
An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below.
An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again:
Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again.
Build step 'Execute shell' marked build as failure
Performing Post build task...
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Mon Jun 24 01:11:49 2019 From: ci at centos.org (ci at centos.org) Date: Mon, 24 Jun 2019 01:11:49 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #233 In-Reply-To: <2142847659.1917.1561252947849.JavaMail.jenkins@jenkins.ci.centos.org> References: <2142847659.1917.1561252947849.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1481963468.2037.1561338709381.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 56.45 KB...] changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. 
Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... [701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], 
u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── create └── prepare --> Scenario: 'default' --> Action: 'create' [DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg. PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] 
******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', 
u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
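[Editor's note] The ansible-lint findings above ([701] "Role info should contain platforms" and the [703] series) all point at untouched Galaxy boilerplate in roles/firewall_config/meta/main.yml. A minimal fix is to fill in galaxy_info and declare the supported platforms; the concrete values in this sketch are illustrative, not taken from the gluster-ansible-infra repository:

```yaml
# roles/firewall_config/meta/main.yml -- illustrative values, not the real file
galaxy_info:
  author: Gluster Ansible maintainers        # [703] replace the "your name" placeholder
  description: Configure firewalld for GlusterFS nodes   # [703]
  company: Red Hat                           # [703]
  license: GPLv2                             # [703] pick an actual license, not the template text
  min_ansible_version: 2.4
  platforms:                                 # [701] role info should contain platforms
    - name: EL
      versions:
        - 7
  galaxy_tags: []
dependencies: []
```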
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Tue Jun 25 00:15:58 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 25 Jun 2019 00:15:58 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #405 In-Reply-To: <1266851674.2029.1561335372499.JavaMail.jenkins@jenkins.ci.centos.org> References: <1266851674.2029.1561335372499.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <914137926.2125.1561421758556.JavaMail.jenkins@jenkins.ci.centos.org> See Changes: [dkhandel] Run clang-analyzer for gluster-block ------------------------------------------ [...truncated 38.63 KB...] Transaction test succeeded Running transaction Installing : python36-libs-3.6.8-1.el7.x86_64 1/52 Installing : python36-3.6.8-1.el7.x86_64 2/52 Installing : apr-1.4.8-3.el7_4.1.x86_64 3/52 Installing : mpfr-3.1.1-4.el7.x86_64 4/52 Installing : libmpc-1.0.1-3.el7.x86_64 5/52 Installing : apr-util-1.5.2-6.el7.x86_64 6/52 Installing : python36-six-1.11.0-3.el7.noarch 7/52 Installing : cpp-4.8.5-36.el7_6.2.x86_64 8/52 Installing : python36-idna-2.7-2.el7.noarch 9/52 Installing : python36-pysocks-1.6.8-6.el7.noarch 10/52 Installing : python36-urllib3-1.19.1-5.el7.noarch 11/52 Installing : python36-pyroute2-0.4.13-2.el7.noarch 12/52 Installing : python36-setuptools-39.2.0-3.el7.noarch 13/52 Installing : python36-chardet-2.3.0-6.el7.noarch 14/52 Installing : python36-requests-2.12.5-3.el7.noarch 15/52 Installing : python36-distro-1.2.0-3.el7.noarch 16/52 Installing : python36-markupsafe-0.23-3.el7.x86_64 17/52 Installing : python36-jinja2-2.8.1-2.el7.noarch 18/52 Installing : python36-rpm-4.11.3-4.el7.x86_64 19/52 Installing : elfutils-0.172-2.el7.x86_64 20/52 Installing : 
unzip-6.0-19.el7.x86_64 21/52 Installing : dwz-0.11-3.el7.x86_64 22/52 Installing : bzip2-1.0.6-13.el7.x86_64 23/52 Installing : usermode-1.111-5.el7.x86_64 24/52 Installing : pakchois-0.4-10.el7.x86_64 25/52 Installing : distribution-gpg-keys-1.31-1.el7.noarch 26/52 Installing : mock-core-configs-30.4-1.el7.noarch 27/52 Installing : patch-2.7.1-10.el7_5.x86_64 28/52 Installing : libmodman-2.0.1-8.el7.x86_64 29/52 Installing : libproxy-0.4.11-11.el7.x86_64 30/52 Installing : gdb-7.6.1-114.el7.x86_64 31/52 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/52 Installing : perl-srpm-macros-1-8.el7.noarch 33/52 Installing : pigz-2.3.4-1.el7.x86_64 34/52 Installing : golang-src-1.11.5-1.el7.noarch 35/52 Installing : kernel-headers-3.10.0-957.21.3.el7.x86_64 36/52 Installing : glibc-headers-2.17-260.el7_6.5.x86_64 37/52 Installing : glibc-devel-2.17-260.el7_6.5.x86_64 38/52 Installing : gcc-4.8.5-36.el7_6.2.x86_64 39/52 Installing : nettle-2.7.1-8.el7.x86_64 40/52 Installing : zip-3.0-11.el7.x86_64 41/52 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 42/52 Installing : mercurial-2.6.2-8.el7_4.x86_64 43/52 Installing : trousers-0.3.14-2.el7.x86_64 44/52 Installing : gnutls-3.3.29-9.el7_6.x86_64 45/52 Installing : neon-0.30.0-3.el7.x86_64 46/52 Installing : subversion-libs-1.7.14-14.el7.x86_64 47/52 Installing : subversion-1.7.14-14.el7.x86_64 48/52 Installing : golang-1.11.5-1.el7.x86_64 49/52 Installing : golang-bin-1.11.5-1.el7.x86_64 50/52 Installing : rpm-build-4.11.3-35.el7.x86_64 51/52 Installing : mock-1.4.16-1.el7.noarch 52/52 Verifying : trousers-0.3.14-2.el7.x86_64 1/52 Verifying : python36-idna-2.7-2.el7.noarch 2/52 Verifying : rpm-build-4.11.3-35.el7.x86_64 3/52 Verifying : python36-pysocks-1.6.8-6.el7.noarch 4/52 Verifying : mercurial-2.6.2-8.el7_4.x86_64 5/52 Verifying : zip-3.0-11.el7.x86_64 6/52 Verifying : python36-3.6.8-1.el7.x86_64 7/52 Verifying : subversion-libs-1.7.14-14.el7.x86_64 8/52 Verifying : python36-urllib3-1.19.1-5.el7.noarch 
9/52 Verifying : nettle-2.7.1-8.el7.x86_64 10/52 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 11/52 Verifying : kernel-headers-3.10.0-957.21.3.el7.x86_64 12/52 Verifying : golang-src-1.11.5-1.el7.noarch 13/52 Verifying : python36-pyroute2-0.4.13-2.el7.noarch 14/52 Verifying : pigz-2.3.4-1.el7.x86_64 15/52 Verifying : perl-srpm-macros-1-8.el7.noarch 16/52 Verifying : golang-1.11.5-1.el7.x86_64 17/52 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 18/52 Verifying : glibc-devel-2.17-260.el7_6.5.x86_64 19/52 Verifying : golang-bin-1.11.5-1.el7.x86_64 20/52 Verifying : gdb-7.6.1-114.el7.x86_64 21/52 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 22/52 Verifying : gnutls-3.3.29-9.el7_6.x86_64 23/52 Verifying : mock-1.4.16-1.el7.noarch 24/52 Verifying : libmodman-2.0.1-8.el7.x86_64 25/52 Verifying : python36-setuptools-39.2.0-3.el7.noarch 26/52 Verifying : mpfr-3.1.1-4.el7.x86_64 27/52 Verifying : python36-six-1.11.0-3.el7.noarch 28/52 Verifying : apr-util-1.5.2-6.el7.x86_64 29/52 Verifying : python36-chardet-2.3.0-6.el7.noarch 30/52 Verifying : patch-2.7.1-10.el7_5.x86_64 31/52 Verifying : distribution-gpg-keys-1.31-1.el7.noarch 32/52 Verifying : pakchois-0.4-10.el7.x86_64 33/52 Verifying : usermode-1.111-5.el7.x86_64 34/52 Verifying : apr-1.4.8-3.el7_4.1.x86_64 35/52 Verifying : libproxy-0.4.11-11.el7.x86_64 36/52 Verifying : mock-core-configs-30.4-1.el7.noarch 37/52 Verifying : neon-0.30.0-3.el7.x86_64 38/52 Verifying : bzip2-1.0.6-13.el7.x86_64 39/52 Verifying : subversion-1.7.14-14.el7.x86_64 40/52 Verifying : python36-distro-1.2.0-3.el7.noarch 41/52 Verifying : glibc-headers-2.17-260.el7_6.5.x86_64 42/52 Verifying : dwz-0.11-3.el7.x86_64 43/52 Verifying : unzip-6.0-19.el7.x86_64 44/52 Verifying : python36-markupsafe-0.23-3.el7.x86_64 45/52 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 46/52 Verifying : python36-requests-2.12.5-3.el7.noarch 47/52 Verifying : python36-jinja2-2.8.1-2.el7.noarch 48/52 Verifying : python36-libs-3.6.8-1.el7.x86_64 49/52 Verifying : 
elfutils-0.172-2.el7.x86_64 50/52 Verifying : python36-rpm-4.11.3-4.el7.x86_64 51/52 Verifying : libmpc-1.0.1-3.el7.x86_64 52/52 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.16-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.31-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5 glibc-headers.x86_64 0:2.17-260.el7_6.5 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.21.3.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.4-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python36.x86_64 0:3.6.8-1.el7 python36-chardet.noarch 0:2.3.0-6.el7 python36-distro.noarch 0:1.2.0-3.el7 python36-idna.noarch 0:2.7-2.el7 python36-jinja2.noarch 0:2.8.1-2.el7 python36-libs.x86_64 0:3.6.8-1.el7 python36-markupsafe.x86_64 0:0.23-3.el7 python36-pyroute2.noarch 0:0.4.13-2.el7 python36-pysocks.noarch 0:1.6.8-6.el7 python36-requests.noarch 0:2.12.5-3.el7 python36-rpm.x86_64 0:4.11.3-4.el7 python36-setuptools.noarch 0:39.2.0-3.el7 python36-six.noarch 0:1.11.0-3.el7 python36-urllib3.noarch 0:1.19.1-5.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1956 0 --:--:-- --:--:-- --:--:-- 1964 100 8513k 100 8513k 0 0 12.6M 0 --:--:-- --:--:-- --:--:-- 12.6M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2131 0 --:--:-- --:--:-- --:--:-- 2132 100 38.3M 100 38.3M 0 0 42.5M 0 --:--:-- --:--:-- --:--:-- 42.5M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 539 0 --:--:-- --:--:-- --:--:-- 540 0 0 0 620 0 0 1728 0 --:--:-- --:--:-- --:--:-- 1728 100 10.7M 100 10.7M 0 0 12.9M 0 --:--:-- --:--:-- --:--:-- 12.9M ~/nightlyrpmKd9F6Z/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmKd9F6Z/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmKd9F6Z/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmKd9F6Z ~ INFO: mock.py version 1.4.16 starting (python version = 3.6.8)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmKd9F6Z/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.16 INFO: Mock Version: 1.4.16 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmKd9F6Z/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 23 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 8c679b2140464193ae81117c720defa2 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.jkao33rf:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins1128589062726433.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done 79e97967 +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 139 | n12.crusty | 172.19.2.12 | crusty | 3717 | Deployed | 79e97967 | None | None | 7 | x86_64 | 1 | 2110 | None | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Tue Jun 25 00:40:54 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 25 Jun 2019 00:40:54 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #209 In-Reply-To: <1564399888.2033.1561336853111.JavaMail.jenkins@jenkins.ci.centos.org> References: <1564399888.2033.1561336853111.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1577119149.2126.1561423254066.JavaMail.jenkins@jenkins.ci.centos.org> See Changes: [dkhandel] Run clang-analyzer for gluster-block ------------------------------------------ [...truncated 287.54 KB...] 
TASK [container-engine/docker : check number of search domains] **************** Tuesday 25 June 2019 01:40:12 +0100 (0:00:00.284) 0:03:02.528 ********** TASK [container-engine/docker : check length of search domains] **************** Tuesday 25 June 2019 01:40:12 +0100 (0:00:00.279) 0:03:02.808 ********** TASK [container-engine/docker : check for minimum kernel version] ************** Tuesday 25 June 2019 01:40:12 +0100 (0:00:00.310) 0:03:03.119 ********** TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Tuesday 25 June 2019 01:40:12 +0100 (0:00:00.289) 0:03:03.409 ********** TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] *** Tuesday 25 June 2019 01:40:13 +0100 (0:00:00.641) 0:03:04.050 ********** TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Tuesday 25 June 2019 01:40:14 +0100 (0:00:01.295) 0:03:05.346 ********** TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Tuesday 25 June 2019 01:40:15 +0100 (0:00:00.257) 0:03:05.603 ********** TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Tuesday 25 June 2019 01:40:15 +0100 (0:00:00.268) 0:03:05.872 ********** TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Tuesday 25 June 2019 01:40:15 +0100 (0:00:00.310) 0:03:06.182 ********** TASK [container-engine/docker : Configure docker repository on Fedora] ********* Tuesday 25 June 2019 01:40:15 +0100 (0:00:00.303) 0:03:06.486 ********** TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Tuesday 25 June 2019 01:40:16 +0100 (0:00:00.287) 0:03:06.773 ********** TASK [container-engine/docker : Copy yum.conf for editing] ********************* Tuesday 25 June 2019 01:40:16 +0100 (0:00:00.281) 0:03:07.055 ********** TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Tuesday 25 June 2019 
01:40:16 +0100 (0:00:00.277) 0:03:07.332 ********** TASK [container-engine/docker : ensure docker packages are installed] ********** Tuesday 25 June 2019 01:40:17 +0100 (0:00:00.314) 0:03:07.647 ********** TASK [container-engine/docker : Ensure docker packages are installed] ********** Tuesday 25 June 2019 01:40:17 +0100 (0:00:00.347) 0:03:07.994 ********** TASK [container-engine/docker : get available packages on Ubuntu] ************** Tuesday 25 June 2019 01:40:17 +0100 (0:00:00.328) 0:03:08.323 ********** TASK [container-engine/docker : show available packages on ubuntu] ************* Tuesday 25 June 2019 01:40:18 +0100 (0:00:00.276) 0:03:08.600 ********** TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Tuesday 25 June 2019 01:40:18 +0100 (0:00:00.289) 0:03:08.890 ********** TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Tuesday 25 June 2019 01:40:18 +0100 (0:00:00.299) 0:03:09.189 ********** ok: [kube2] ok: [kube3] ok: [kube1] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Tuesday 25 June 2019 01:40:20 +0100 (0:00:02.062) 0:03:11.252 ********** ok: [kube1] ok: [kube2] ok: [kube3] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Tuesday 25 June 2019 01:40:21 +0100 (0:00:01.113) 0:03:12.365 ********** TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Tuesday 25 June 2019 01:40:22 +0100 (0:00:00.290) 0:03:12.656 ********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Tuesday 25 June 2019 01:40:23 +0100 (0:00:01.113) 0:03:13.769 ********** TASK [container-engine/docker : get systemd version] *************************** Tuesday 25 June 2019 01:40:23 +0100 (0:00:00.299) 0:03:14.068 ********** TASK [container-engine/docker : Write docker.service systemd file] ************* Tuesday 25 June 2019 01:40:23 +0100 (0:00:00.312) 0:03:14.380 ********** TASK [container-engine/docker : Write docker options systemd drop-in] ********** Tuesday 25 June 2019 01:40:24 +0100 (0:00:00.297) 0:03:14.678 ********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Tuesday 25 June 2019 01:40:26 +0100 (0:00:01.953) 0:03:16.632 ********** changed: [kube1] changed: [kube3] changed: [kube2] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Tuesday 25 June 2019 01:40:28 +0100 (0:00:02.060) 0:03:18.692 ********** TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Tuesday 25 June 2019 01:40:28 +0100 (0:00:00.322) 0:03:19.015 ********** RUNNING HANDLER [container-engine/docker : restart docker] ********************* Tuesday 25 June 2019 01:40:28 +0100 (0:00:00.230) 0:03:19.246 ********** changed: [kube1] changed: [kube3] changed: [kube2] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Tuesday 25 June 2019 01:40:29 +0100 (0:00:00.978) 0:03:20.225 ********** changed: [kube1] changed: [kube3] changed: [kube2] RUNNING HANDLER 
[container-engine/docker : Docker | reload docker.socket] ****** Tuesday 25 June 2019 01:40:30 +0100 (0:00:01.129) 0:03:21.354 ********** RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Tuesday 25 June 2019 01:40:31 +0100 (0:00:00.295) 0:03:21.650 ********** changed: [kube3] changed: [kube2] changed: [kube1] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Tuesday 25 June 2019 01:40:35 +0100 (0:00:04.076) 0:03:25.726 ********** Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube1] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Tuesday 25 June 2019 01:40:45 +0100 (0:00:10.197) 0:03:35.924 ********** changed: [kube1] changed: [kube3] changed: [kube2] TASK [container-engine/docker : ensure docker service is started and enabled] *** Tuesday 25 June 2019 01:40:46 +0100 (0:00:01.254) 0:03:37.179 ********** ok: [kube1] => (item=docker) ok: [kube2] => (item=docker) ok: [kube3] => (item=docker) TASK [download : include_tasks] ************************************************ Tuesday 25 June 2019 01:40:48 +0100 (0:00:01.347) 0:03:38.526 ********** included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Tuesday 25 June 2019 01:40:48 +0100 (0:00:00.502) 0:03:39.028 ********** ok: [kube1] ok: [kube3] ok: [kube2] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Tuesday 25 June 2019 01:40:49 +0100 (0:00:01.170) 0:03:40.199 ********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [download : container_download | create local directory for saved/loaded container images] *** Tuesday 25 June 2019 01:40:50 +0100 (0:00:01.091) 0:03:41.290 ********** TASK [download : 
Download items] *********************************************** Tuesday 25 June 2019 01:40:50 +0100 (0:00:00.128) 0:03:41.418 ********** fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} PLAY RECAP ********************************************************************* kube1 : ok=109 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0 kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0 kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0 Tuesday 25 June 2019 01:40:53 +0100 (0:00:02.743) 0:03:44.161 ********** =============================================================================== Install packages ------------------------------------------------------- 34.74s Wait for host to be available ------------------------------------------ 23.97s gather facts from all instances ---------------------------------------- 17.61s container-engine/docker : Docker | pause while Docker restarts --------- 10.20s Persist loaded modules -------------------------------------------------- 6.17s kubernetes/preinstall : Create kubernetes directories ------------------- 4.15s container-engine/docker : Docker | reload docker ------------------------ 4.08s download : Download items ----------------------------------------------- 2.74s Load required kernel modules 
-------------------------------------------- 2.60s bootstrap-os : Gather nodes hostnames ----------------------------------- 2.60s bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.49s kubernetes/preinstall : Create cni directories -------------------------- 2.49s Extend root VG ---------------------------------------------------------- 2.35s kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.15s download : Sync container ----------------------------------------------- 2.11s kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.07s container-engine/docker : ensure service is started if docker packages are already present --- 2.06s container-engine/docker : Write docker dns systemd drop-in -------------- 2.06s download : Download items ----------------------------------------------- 2.04s Gathering Facts --------------------------------------------------------- 2.01s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
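Every one of the repeated `FAILED!` entries above carries the same root cause: as of Ansible 2.8, a dynamic `include_tasks` (a `TaskInclude`) no longer accepts `delegate_to` as a direct task keyword, so the task at `download_container.yml` line 2 fails validation before anything runs. A minimal sketch of one possible repair, assuming the `apply:` semantics available since Ansible 2.7 (the included file name and the `download_delegate` variable placement are illustrative, not kubespray's exact code):

```yaml
# Failing shape (rejected by Ansible 2.8 task validation):
- name: container_download | Make download decision if pull is required by tag or sha256
  include_tasks: set_docker_image_facts.yml   # included file name is illustrative
  delegate_to: "{{ download_delegate }}"      # not a valid attribute for a TaskInclude

# One possible fix: push the keyword onto the included tasks via apply
- name: container_download | Make download decision if pull is required by tag or sha256
  include_tasks:
    file: set_docker_image_facts.yml          # included file name is illustrative
    apply:
      delegate_to: "{{ download_delegate }}"
```

The other common remedy is simply pinning the control node to an Ansible release the checked-out kubespray supports, since this playbook predates the stricter 2.8 keyword validation.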
Could not match :Build started : False Logical operation result is FALSE Skipping script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0
From ci at centos.org Tue Jun 25 01:23:09 2019
From: ci at centos.org (ci at centos.org)
Date: Tue, 25 Jun 2019 01:23:09 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #234
In-Reply-To: <1481963468.2037.1561338709381.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1481963468.2037.1561338709381.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1355372426.2133.1561425789978.JavaMail.jenkins@jenkins.ci.centos.org>
See Changes: [dkhandel] Run clang-analyzer for gluster-block
------------------------------------------
[...truncated 56.43 KB...]
changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', 
u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── create └── 
prepare --> Scenario: 'default' --> Action: 'create' [DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg. PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', 
u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
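Both lint aborts above trace back to ansible-lint rules [701] and [703]: the role still ships the unedited `ansible-galaxy init` scaffold in `roles/firewall_config/meta/main.yml` ('your name'/'your description'/'your company' placeholders, no `platforms` list). A sketch of the kind of metadata that would satisfy those rules; every value below is an illustrative placeholder, not the project's actual metadata:

```yaml
# meta/main.yml - illustrative values only
galaxy_info:
  author: Gluster Ansible maintainers               # replaces 'your name' (rule 703)
  description: Configure firewalld for Gluster hosts  # replaces 'your description' (rule 703)
  company: Red Hat                                  # rule 703; the line may also be dropped
  license: GPLv3                                    # replaces 'license (GPLv2, CC-BY, etc)' (rule 703)
  min_ansible_version: 2.5
  platforms:                                        # rule 701: role info should contain platforms
    - name: EL
      versions:
        - 7
  galaxy_tags: []
dependencies: []
```

With those fields filled in, the 'lint' action would no longer abort the molecule test sequence before converge/verify ever run.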
Could not match :Build started : False Logical operation result is FALSE Skipping script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0
From ci at centos.org Wed Jun 26 00:16:08 2019
From: ci at centos.org (ci at centos.org)
Date: Wed, 26 Jun 2019 00:16:08 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #406
In-Reply-To: <914137926.2125.1561421758556.JavaMail.jenkins@jenkins.ci.centos.org>
References: <914137926.2125.1561421758556.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1399840730.2222.1561508168141.JavaMail.jenkins@jenkins.ci.centos.org>
See
------------------------------------------
[...truncated 38.60 KB...]
Transaction test succeeded Running transaction Installing : python36-libs-3.6.8-1.el7.x86_64 1/52 Installing : python36-3.6.8-1.el7.x86_64 2/52 Installing : apr-1.4.8-3.el7_4.1.x86_64 3/52 Installing : mpfr-3.1.1-4.el7.x86_64 4/52 Installing : libmpc-1.0.1-3.el7.x86_64 5/52 Installing : apr-util-1.5.2-6.el7.x86_64 6/52 Installing : python36-six-1.11.0-3.el7.noarch 7/52 Installing : cpp-4.8.5-36.el7_6.2.x86_64 8/52 Installing : python36-idna-2.7-2.el7.noarch 9/52 Installing : python36-pysocks-1.6.8-6.el7.noarch 10/52 Installing : python36-urllib3-1.19.1-5.el7.noarch 11/52 Installing : python36-pyroute2-0.4.13-2.el7.noarch 12/52 Installing : python36-setuptools-39.2.0-3.el7.noarch 13/52 Installing : python36-chardet-2.3.0-6.el7.noarch 14/52 Installing : python36-requests-2.12.5-3.el7.noarch 15/52 Installing : python36-distro-1.2.0-3.el7.noarch 16/52 Installing : python36-markupsafe-0.23-3.el7.x86_64 17/52 Installing : python36-jinja2-2.8.1-2.el7.noarch 18/52 Installing : python36-rpm-4.11.3-4.el7.x86_64 19/52 Installing : elfutils-0.172-2.el7.x86_64 20/52 Installing : unzip-6.0-19.el7.x86_64 21/52 Installing : dwz-0.11-3.el7.x86_64 
22/52 Installing : bzip2-1.0.6-13.el7.x86_64 23/52 Installing : usermode-1.111-5.el7.x86_64 24/52 Installing : pakchois-0.4-10.el7.x86_64 25/52 Installing : distribution-gpg-keys-1.31-1.el7.noarch 26/52 Installing : mock-core-configs-30.4-1.el7.noarch 27/52 Installing : patch-2.7.1-10.el7_5.x86_64 28/52 Installing : libmodman-2.0.1-8.el7.x86_64 29/52 Installing : libproxy-0.4.11-11.el7.x86_64 30/52 Installing : gdb-7.6.1-114.el7.x86_64 31/52 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/52 Installing : perl-srpm-macros-1-8.el7.noarch 33/52 Installing : pigz-2.3.4-1.el7.x86_64 34/52 Installing : golang-src-1.11.5-1.el7.noarch 35/52 Installing : kernel-headers-3.10.0-957.21.3.el7.x86_64 36/52 Installing : glibc-headers-2.17-260.el7_6.5.x86_64 37/52 Installing : glibc-devel-2.17-260.el7_6.5.x86_64 38/52 Installing : gcc-4.8.5-36.el7_6.2.x86_64 39/52 Installing : nettle-2.7.1-8.el7.x86_64 40/52 Installing : zip-3.0-11.el7.x86_64 41/52 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 42/52 Installing : mercurial-2.6.2-8.el7_4.x86_64 43/52 Installing : trousers-0.3.14-2.el7.x86_64 44/52 Installing : gnutls-3.3.29-9.el7_6.x86_64 45/52 Installing : neon-0.30.0-3.el7.x86_64 46/52 Installing : subversion-libs-1.7.14-14.el7.x86_64 47/52 Installing : subversion-1.7.14-14.el7.x86_64 48/52 Installing : golang-1.11.5-1.el7.x86_64 49/52 Installing : golang-bin-1.11.5-1.el7.x86_64 50/52 Installing : rpm-build-4.11.3-35.el7.x86_64 51/52 Installing : mock-1.4.16-1.el7.noarch 52/52 Verifying : trousers-0.3.14-2.el7.x86_64 1/52 Verifying : python36-idna-2.7-2.el7.noarch 2/52 Verifying : rpm-build-4.11.3-35.el7.x86_64 3/52 Verifying : python36-pysocks-1.6.8-6.el7.noarch 4/52 Verifying : mercurial-2.6.2-8.el7_4.x86_64 5/52 Verifying : zip-3.0-11.el7.x86_64 6/52 Verifying : python36-3.6.8-1.el7.x86_64 7/52 Verifying : subversion-libs-1.7.14-14.el7.x86_64 8/52 Verifying : python36-urllib3-1.19.1-5.el7.noarch 9/52 Verifying : nettle-2.7.1-8.el7.x86_64 10/52 Verifying : 
gcc-4.8.5-36.el7_6.2.x86_64 11/52 Verifying : kernel-headers-3.10.0-957.21.3.el7.x86_64 12/52 Verifying : golang-src-1.11.5-1.el7.noarch 13/52 Verifying : python36-pyroute2-0.4.13-2.el7.noarch 14/52 Verifying : pigz-2.3.4-1.el7.x86_64 15/52 Verifying : perl-srpm-macros-1-8.el7.noarch 16/52 Verifying : golang-1.11.5-1.el7.x86_64 17/52 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 18/52 Verifying : glibc-devel-2.17-260.el7_6.5.x86_64 19/52 Verifying : golang-bin-1.11.5-1.el7.x86_64 20/52 Verifying : gdb-7.6.1-114.el7.x86_64 21/52 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 22/52 Verifying : gnutls-3.3.29-9.el7_6.x86_64 23/52 Verifying : mock-1.4.16-1.el7.noarch 24/52 Verifying : libmodman-2.0.1-8.el7.x86_64 25/52 Verifying : python36-setuptools-39.2.0-3.el7.noarch 26/52 Verifying : mpfr-3.1.1-4.el7.x86_64 27/52 Verifying : python36-six-1.11.0-3.el7.noarch 28/52 Verifying : apr-util-1.5.2-6.el7.x86_64 29/52 Verifying : python36-chardet-2.3.0-6.el7.noarch 30/52 Verifying : patch-2.7.1-10.el7_5.x86_64 31/52 Verifying : distribution-gpg-keys-1.31-1.el7.noarch 32/52 Verifying : pakchois-0.4-10.el7.x86_64 33/52 Verifying : usermode-1.111-5.el7.x86_64 34/52 Verifying : apr-1.4.8-3.el7_4.1.x86_64 35/52 Verifying : libproxy-0.4.11-11.el7.x86_64 36/52 Verifying : mock-core-configs-30.4-1.el7.noarch 37/52 Verifying : neon-0.30.0-3.el7.x86_64 38/52 Verifying : bzip2-1.0.6-13.el7.x86_64 39/52 Verifying : subversion-1.7.14-14.el7.x86_64 40/52 Verifying : python36-distro-1.2.0-3.el7.noarch 41/52 Verifying : glibc-headers-2.17-260.el7_6.5.x86_64 42/52 Verifying : dwz-0.11-3.el7.x86_64 43/52 Verifying : unzip-6.0-19.el7.x86_64 44/52 Verifying : python36-markupsafe-0.23-3.el7.x86_64 45/52 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 46/52 Verifying : python36-requests-2.12.5-3.el7.noarch 47/52 Verifying : python36-jinja2-2.8.1-2.el7.noarch 48/52 Verifying : python36-libs-3.6.8-1.el7.x86_64 49/52 Verifying : elfutils-0.172-2.el7.x86_64 50/52 Verifying : 
python36-rpm-4.11.3-4.el7.x86_64 51/52 Verifying : libmpc-1.0.1-3.el7.x86_64 52/52

Installed:
  golang.x86_64 0:1.11.5-1.el7  mock.noarch 0:1.4.16-1.el7  rpm-build.x86_64 0:4.11.3-35.el7

Dependency Installed:
  apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.31-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5 glibc-headers.x86_64 0:2.17-260.el7_6.5 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.21.3.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.4-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python36.x86_64 0:3.6.8-1.el7 python36-chardet.noarch 0:2.3.0-6.el7 python36-distro.noarch 0:1.2.0-3.el7 python36-idna.noarch 0:2.7-2.el7 python36-jinja2.noarch 0:2.8.1-2.el7 python36-libs.x86_64 0:3.6.8-1.el7 python36-markupsafe.x86_64 0:0.23-3.el7 python36-pyroute2.noarch 0:0.4.13-2.el7 python36-pysocks.noarch 0:1.6.8-6.el7 python36-requests.noarch 0:2.12.5-3.el7 python36-rpm.x86_64 0:4.11.3-4.el7 python36-setuptools.noarch 0:39.2.0-3.el7 python36-six.noarch 0:1.11.0-3.el7 python36-urllib3.noarch 0:1.19.1-5.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7

Complete!
LINUX
Installing dep. 
Version: v0.5.0
[curl download progress meter: 8513k fetched at ~12.5M/s]
Installing gometalinter.
Version: 2.0.5
[curl download progress meter: 38.3M fetched at ~35.3M/s]
Installing etcd.
Version: v3.3.9
[curl download progress meter: 10.7M fetched at ~14.6M/s]
~/nightlyrpm8GrENX/go/src/github.com/gluster/glusterd2 ~
Installing vendored packages
Creating dist archive /root/nightlyrpm8GrENX/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz
Created dist archive /root/nightlyrpm8GrENX/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz
~ ~/nightlyrpm8GrENX ~
INFO: mock.py version 1.4.16 starting (python version = 3.6.8)... 
Start: init plugins
INFO: selinux disabled
Finish: init plugins
Start: run
INFO: Start(/root/nightlyrpm8GrENX/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64)
Start: clean chroot
Finish: clean chroot
Start: chroot init
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled yum cache
Start: cleaning yum metadata
Finish: cleaning yum metadata
INFO: enabled HW Info plugin
Mock Version: 1.4.16
INFO: Mock Version: 1.4.16
Start: yum install
Finish: yum install
Start: creating root cache
Finish: creating root cache
Finish: chroot init
INFO: Installed packages:
Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: Outputting list of installed packages
Finish: Outputting list of installed packages
ERROR: Exception(/root/nightlyrpm8GrENX/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 29 seconds
INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64
INFO: Cleaning up build root ('cleanup_on_failure=True')
Start: clean chroot
Finish: clean chroot
ERROR: Command failed:
 # /usr/bin/systemd-nspawn -q -M 37a8d29ed13d41b78ce8387f11b1a485 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.wz9wlhfl:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$  --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec
Build step 'Execute shell' marked build as failure
Performing Post build task... 
Match found for :Building remotely : True
Logical operation result is TRUE
Running script  :
 # cico-node-done-from-ansible.sh
 # A script that releases nodes from a SSID file written by
 SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
 for ssid in $(cat ${SSID_FILE})
 do
     cico -q node done $ssid
 done
[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins7593902029924254697.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done 878afeac
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname   | ip_address  | chassis | used_count | current_state | comment  | distro | rel  | centos_version | architecture | node_pool | console_port | flavor |
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 157     | n30.crusty | 172.19.2.30 | crusty  | 3722       | Deployed      | 878afeac | None   | None | 7              | x86_64       | 1         | 2290         | None   |
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0

From ci at centos.org  Wed Jun 26 00:40:50 2019
From: ci at centos.org (ci at centos.org)
Date: Wed, 26 Jun 2019 00:40:50 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #210
In-Reply-To: <1577119149.2126.1561423254066.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1577119149.2126.1561423254066.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1993745910.2223.1561509650962.JavaMail.jenkins@jenkins.ci.centos.org>

See
------------------------------------------
[...truncated 287.54 KB...] 
TASK [container-engine/docker : check number of search domains] ****************
Wednesday 26 June 2019  01:40:08 +0100 (0:00:00.289)       0:02:58.310 ********
TASK [container-engine/docker : check length of search domains] ****************
Wednesday 26 June 2019  01:40:08 +0100 (0:00:00.287)       0:02:58.598 ********
TASK [container-engine/docker : check for minimum kernel version] **************
Wednesday 26 June 2019  01:40:08 +0100 (0:00:00.298)       0:02:58.897 ********
TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] ***
Wednesday 26 June 2019  01:40:09 +0100 (0:00:00.372)       0:02:59.270 ********
TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] ***
Wednesday 26 June 2019  01:40:09 +0100 (0:00:00.553)       0:02:59.824 ********
TASK [container-engine/docker : ensure docker-ce repository public key is installed] ***
Wednesday 26 June 2019  01:40:11 +0100 (0:00:01.361)       0:03:01.185 ********
TASK [container-engine/docker : ensure docker-ce repository is enabled] ********
Wednesday 26 June 2019  01:40:11 +0100 (0:00:00.267)       0:03:01.453 ********
TASK [container-engine/docker : ensure docker-engine repository public key is installed] ***
Wednesday 26 June 2019  01:40:11 +0100 (0:00:00.250)       0:03:01.703 ********
TASK [container-engine/docker : ensure docker-engine repository is enabled] ****
Wednesday 26 June 2019  01:40:11 +0100 (0:00:00.346)       0:03:02.049 ********
TASK [container-engine/docker : Configure docker repository on Fedora] *********
Wednesday 26 June 2019  01:40:12 +0100 (0:00:00.330)       0:03:02.380 ********
TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] ***
Wednesday 26 June 2019  01:40:12 +0100 (0:00:00.284)       0:03:02.664 ********
TASK [container-engine/docker : Copy yum.conf for editing] *********************
Wednesday 26 June 2019  01:40:12 +0100 (0:00:00.276)       0:03:02.941 ********
TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ******
Wednesday 26 June 2019  01:40:13 +0100 (0:00:00.287)       0:03:03.229 ********
TASK [container-engine/docker : ensure docker packages are installed] **********
Wednesday 26 June 2019  01:40:13 +0100 (0:00:00.293)       0:03:03.522 ********
TASK [container-engine/docker : Ensure docker packages are installed] **********
Wednesday 26 June 2019  01:40:13 +0100 (0:00:00.373)       0:03:03.896 ********
TASK [container-engine/docker : get available packages on Ubuntu] **************
Wednesday 26 June 2019  01:40:14 +0100 (0:00:00.347)       0:03:04.243 ********
TASK [container-engine/docker : show available packages on ubuntu] *************
Wednesday 26 June 2019  01:40:14 +0100 (0:00:00.333)       0:03:04.577 ********
TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] ***
Wednesday 26 June 2019  01:40:14 +0100 (0:00:00.326)       0:03:04.903 ********
TASK [container-engine/docker : ensure service is started if docker packages are already present] ***
Wednesday 26 June 2019  01:40:15 +0100 (0:00:00.318)       0:03:05.222 ********
ok: [kube2]
ok: [kube3]
ok: [kube1]
[WARNING]: flush_handlers task does not support when conditional
TASK [container-engine/docker : set fact for docker_version] *******************
Wednesday 26 June 2019  01:40:17 +0100 (0:00:01.948)       0:03:07.170 ********
ok: [kube1]
ok: [kube3]
ok: [kube2]
TASK [container-engine/docker : check minimum docker version for docker_dns mode. You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] ***
Wednesday 26 June 2019  01:40:18 +0100 (0:00:01.113)       0:03:08.283 ********
TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] ***
Wednesday 26 June 2019  01:40:18 +0100 (0:00:00.282)       0:03:08.566 ********
changed: [kube1]
changed: [kube2]
changed: [kube3]
TASK [container-engine/docker : Write docker proxy drop-in] ********************
Wednesday 26 June 2019  01:40:19 +0100 (0:00:01.015)       0:03:09.581 ********
TASK [container-engine/docker : get systemd version] ***************************
Wednesday 26 June 2019  01:40:19 +0100 (0:00:00.383)       0:03:09.965 ********
TASK [container-engine/docker : Write docker.service systemd file] *************
Wednesday 26 June 2019  01:40:20 +0100 (0:00:00.331)       0:03:10.296 ********
TASK [container-engine/docker : Write docker options systemd drop-in] **********
Wednesday 26 June 2019  01:40:20 +0100 (0:00:00.301)       0:03:10.598 ********
changed: [kube1]
changed: [kube2]
changed: [kube3]
TASK [container-engine/docker : Write docker dns systemd drop-in] **************
Wednesday 26 June 2019  01:40:22 +0100 (0:00:02.146)       0:03:12.744 ********
changed: [kube1]
changed: [kube2]
changed: [kube3]
TASK [container-engine/docker : Copy docker orphan clean up script to the node] ***
Wednesday 26 June 2019  01:40:24 +0100 (0:00:02.078)       0:03:14.823 ********
TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] ***
Wednesday 26 June 2019  01:40:25 +0100 (0:00:00.328)       0:03:15.151 ********
RUNNING HANDLER [container-engine/docker : restart docker] *********************
Wednesday 26 June 2019  01:40:25 +0100 (0:00:00.232)       0:03:15.384 ********
changed: [kube1]
changed: [kube2]
changed: [kube3]
RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************
Wednesday 26 June 2019  01:40:26 +0100 (0:00:00.985)       0:03:16.370 ********
changed: [kube1]
changed: [kube3]
changed: [kube2]
RUNNING HANDLER [container-engine/docker : Docker | reload docker.socket] ******
Wednesday 26 June 2019  01:40:27 +0100 (0:00:01.047)       0:03:17.418 ********
RUNNING HANDLER [container-engine/docker : Docker | reload docker] *************
Wednesday 26 June 2019  01:40:27 +0100 (0:00:00.280)       0:03:17.698 ********
changed: [kube1]
changed: [kube2]
changed: [kube3]
RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] ***
Wednesday 26 June 2019  01:40:31 +0100 (0:00:04.261)       0:03:21.960 ********
Pausing for 10 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
[container-engine/docker : Docker | pause while Docker restarts]
Waiting for docker restart:
ok: [kube1]
RUNNING HANDLER [container-engine/docker : Docker | wait for docker] ***********
Wednesday 26 June 2019  01:40:42 +0100 (0:00:10.220)       0:03:32.180 ********
changed: [kube3]
changed: [kube1]
changed: [kube2]
TASK [container-engine/docker : ensure docker service is started and enabled] ***
Wednesday 26 June 2019  01:40:43 +0100 (0:00:01.280)       0:03:33.461 ********
ok: [kube1] => (item=docker)
ok: [kube2] => (item=docker)
ok: [kube3] => (item=docker)
TASK [download : include_tasks] ************************************************
Wednesday 26 June 2019  01:40:44 +0100 (0:00:01.239)       0:03:34.701 ********
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3
TASK [download : Register docker images info] **********************************
Wednesday 26 June 2019  01:40:45 +0100 (0:00:00.515)       0:03:35.216 ********
ok: [kube2]
ok: [kube1]
ok: [kube3]
TASK [download : container_download | Create dest directory for saved/loaded container images] ***
Wednesday 26 June 2019  01:40:46 +0100 (0:00:01.235)       0:03:36.452 ********
changed: [kube1]
changed: [kube3]
changed: [kube2]
TASK [download : container_download | create local directory for saved/loaded container images] ***
Wednesday 26 June 2019  01:40:47 +0100 (0:00:01.043)       0:03:37.495 ********
TASK [download : 
Download items] ***********************************************
Wednesday 26 June 2019  01:40:47 +0100 (0:00:00.148)       0:03:37.643 ********
fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
fatal: [kube3]: FAILED! => [same 'delegate_to' TaskInclude error as kube1]
fatal: [kube2]: FAILED! => [same 'delegate_to' TaskInclude error as kube1]
[the identical failure repeats for each remaining download item on kube1, kube2 and kube3]
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube3, kube2
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube3, kube2
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube3, kube2

PLAY RECAP *********************************************************************
kube1 : ok=109 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0
kube2 : ok=96  changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0
kube3 : ok=94  changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0

Wednesday 26 June 2019  01:40:50 +0100 (0:00:02.866)       0:03:40.510 ********
===============================================================================
Install packages ------------------------------------------------------- 34.49s
Wait for host to be available ------------------------------------------ 21.44s
gather facts from all instances ---------------------------------------- 16.44s
container-engine/docker : Docker | pause while Docker restarts --------- 10.22s
Persist loaded modules -------------------------------------------------- 6.05s
container-engine/docker : Docker | reload docker ------------------------ 4.26s
kubernetes/preinstall : Create kubernetes directories ------------------- 3.82s
download : Download items ----------------------------------------------- 2.87s
bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.69s
bootstrap-os : Gather nodes hostnames ----------------------------------- 2.64s
Load required kernel modules -------------------------------------------- 2.61s
kubernetes/preinstall : Create cni directories -------------------------- 2.41s
Extend root VG ---------------------------------------------------------- 2.34s
kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.30s
Gathering Facts --------------------------------------------------------- 2.20s
container-engine/docker : Write docker options systemd drop-in ---------- 2.15s
container-engine/docker : Write docker dns systemd drop-in -------------- 2.08s
bootstrap-os : Create remote_tmp for it is used by another module ------- 2.04s
kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.03s
container-engine/docker : ensure service is started if docker packages are already present --- 1.95s

==> kube3: An error occurred. The error will be shown after all tasks complete.
An error occurred while executing multiple actions in parallel.
Any errors that occurred are shown below.

An error occurred while executing the action on the 'kube3' machine. Please
handle this error then try again:

Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.

Build step 'Execute shell' marked build as failure
Performing Post build task... 
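The Ansible failures above all trace to one pattern: kubespray's `download_container.yml` is pulled in through a dynamic include that carries a `delegate_to` keyword, which the Ansible release in use rejects on a `TaskInclude`. A minimal sketch of the failing shape and one commonly used workaround follows; the task name and the `download_delegate` variable here are illustrative assumptions, not the actual kubespray source:

```yaml
# Failing shape: delegate_to placed directly on a dynamic include.
# Ansible rejects this with:
#   'delegate_to' is not a valid attribute for a TaskInclude
- name: container_download | fetch images        # hypothetical task name
  include_tasks: download_container.yml
  delegate_to: "{{ download_delegate }}"         # invalid on include_tasks

# Workaround sketch: attach the keyword to the included tasks via `apply`
# (supported for dynamic includes in Ansible >= 2.7), or move delegate_to
# onto the individual tasks inside download_container.yml itself.
- name: container_download | fetch images
  include_tasks:
    file: download_container.yml
    apply:
      delegate_to: "{{ download_delegate }}"
```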
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Wed Jun 26 01:17:51 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 26 Jun 2019 01:17:51 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #235 In-Reply-To: <1355372426.2133.1561425789978.JavaMail.jenkins@jenkins.ci.centos.org> References: <1355372426.2133.1561425789978.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1677122934.2229.1561511871591.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 56.43 KB...] changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. 
Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... [701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], 
u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── create └── prepare --> Scenario: 'default' --> Action: 'create' [DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg. PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] 
******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', 
u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
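[Editor's note] Every ansible-lint hit above ([701] and the four [703]s) points at the same thing: the role's meta/main.yml still carries ansible-galaxy's placeholder galaxy_info ("your name", "your description", etc.) and an empty platforms list. A sketch of metadata that would silence these rules — the values here are illustrative assumptions, not the project's actual metadata:

```yaml
# roles/firewall_config/meta/main.yml (illustrative values, not the real file)
galaxy_info:
  author: Gluster Ansible maintainers        # [703] replace default author
  description: Configure firewalld for GlusterFS servers   # [703] replace default description
  company: Example Org                       # [703] replace or drop the placeholder
  license: GPLv3                             # [703] replace "license (GPLv2, CC-BY, etc)"
  min_ansible_version: 2.5
  platforms:                                 # [701] role info should contain platforms
    - name: EL
      versions:
        - 7
  galaxy_tags:
    - gluster
    - firewall
dependencies: []
```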
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Thu Jun 27 00:16:08 2019 From: ci at centos.org (ci at centos.org) Date: Thu, 27 Jun 2019 00:16:08 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #407 In-Reply-To: <1399840730.2222.1561508168141.JavaMail.jenkins@jenkins.ci.centos.org> References: <1399840730.2222.1561508168141.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <699309315.2332.1561594568479.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 38.64 KB...] Transaction test succeeded Running transaction Installing : python36-libs-3.6.8-1.el7.x86_64 1/52 Installing : python36-3.6.8-1.el7.x86_64 2/52 Installing : apr-1.4.8-3.el7_4.1.x86_64 3/52 Installing : mpfr-3.1.1-4.el7.x86_64 4/52 Installing : libmpc-1.0.1-3.el7.x86_64 5/52 Installing : apr-util-1.5.2-6.el7.x86_64 6/52 Installing : python36-six-1.11.0-3.el7.noarch 7/52 Installing : cpp-4.8.5-36.el7_6.2.x86_64 8/52 Installing : python36-idna-2.7-2.el7.noarch 9/52 Installing : python36-pysocks-1.6.8-6.el7.noarch 10/52 Installing : python36-urllib3-1.19.1-5.el7.noarch 11/52 Installing : python36-pyroute2-0.4.13-2.el7.noarch 12/52 Installing : python36-setuptools-39.2.0-3.el7.noarch 13/52 Installing : python36-chardet-2.3.0-6.el7.noarch 14/52 Installing : python36-requests-2.12.5-3.el7.noarch 15/52 Installing : python36-distro-1.2.0-3.el7.noarch 16/52 Installing : python36-markupsafe-0.23-3.el7.x86_64 17/52 Installing : python36-jinja2-2.8.1-2.el7.noarch 18/52 Installing : python36-rpm-4.11.3-4.el7.x86_64 19/52 Installing : elfutils-0.172-2.el7.x86_64 20/52 Installing : unzip-6.0-19.el7.x86_64 21/52 Installing : dwz-0.11-3.el7.x86_64 
22/52 Installing : bzip2-1.0.6-13.el7.x86_64 23/52 Installing : usermode-1.111-5.el7.x86_64 24/52 Installing : pakchois-0.4-10.el7.x86_64 25/52 Installing : distribution-gpg-keys-1.31-1.el7.noarch 26/52 Installing : mock-core-configs-30.4-1.el7.noarch 27/52 Installing : patch-2.7.1-10.el7_5.x86_64 28/52 Installing : libmodman-2.0.1-8.el7.x86_64 29/52 Installing : libproxy-0.4.11-11.el7.x86_64 30/52 Installing : gdb-7.6.1-114.el7.x86_64 31/52 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/52 Installing : perl-srpm-macros-1-8.el7.noarch 33/52 Installing : pigz-2.3.4-1.el7.x86_64 34/52 Installing : golang-src-1.11.5-1.el7.noarch 35/52 Installing : kernel-headers-3.10.0-957.21.3.el7.x86_64 36/52 Installing : glibc-headers-2.17-260.el7_6.5.x86_64 37/52 Installing : glibc-devel-2.17-260.el7_6.5.x86_64 38/52 Installing : gcc-4.8.5-36.el7_6.2.x86_64 39/52 Installing : nettle-2.7.1-8.el7.x86_64 40/52 Installing : zip-3.0-11.el7.x86_64 41/52 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 42/52 Installing : mercurial-2.6.2-8.el7_4.x86_64 43/52 Installing : trousers-0.3.14-2.el7.x86_64 44/52 Installing : gnutls-3.3.29-9.el7_6.x86_64 45/52 Installing : neon-0.30.0-3.el7.x86_64 46/52 Installing : subversion-libs-1.7.14-14.el7.x86_64 47/52 Installing : subversion-1.7.14-14.el7.x86_64 48/52 Installing : golang-1.11.5-1.el7.x86_64 49/52 Installing : golang-bin-1.11.5-1.el7.x86_64 50/52 Installing : rpm-build-4.11.3-35.el7.x86_64 51/52 Installing : mock-1.4.16-1.el7.noarch 52/52 Verifying : trousers-0.3.14-2.el7.x86_64 1/52 Verifying : python36-idna-2.7-2.el7.noarch 2/52 Verifying : rpm-build-4.11.3-35.el7.x86_64 3/52 Verifying : python36-pysocks-1.6.8-6.el7.noarch 4/52 Verifying : mercurial-2.6.2-8.el7_4.x86_64 5/52 Verifying : zip-3.0-11.el7.x86_64 6/52 Verifying : python36-3.6.8-1.el7.x86_64 7/52 Verifying : subversion-libs-1.7.14-14.el7.x86_64 8/52 Verifying : python36-urllib3-1.19.1-5.el7.noarch 9/52 Verifying : nettle-2.7.1-8.el7.x86_64 10/52 Verifying : 
gcc-4.8.5-36.el7_6.2.x86_64 11/52 Verifying : kernel-headers-3.10.0-957.21.3.el7.x86_64 12/52 Verifying : golang-src-1.11.5-1.el7.noarch 13/52 Verifying : python36-pyroute2-0.4.13-2.el7.noarch 14/52 Verifying : pigz-2.3.4-1.el7.x86_64 15/52 Verifying : perl-srpm-macros-1-8.el7.noarch 16/52 Verifying : golang-1.11.5-1.el7.x86_64 17/52 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 18/52 Verifying : glibc-devel-2.17-260.el7_6.5.x86_64 19/52 Verifying : golang-bin-1.11.5-1.el7.x86_64 20/52 Verifying : gdb-7.6.1-114.el7.x86_64 21/52 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 22/52 Verifying : gnutls-3.3.29-9.el7_6.x86_64 23/52 Verifying : mock-1.4.16-1.el7.noarch 24/52 Verifying : libmodman-2.0.1-8.el7.x86_64 25/52 Verifying : python36-setuptools-39.2.0-3.el7.noarch 26/52 Verifying : mpfr-3.1.1-4.el7.x86_64 27/52 Verifying : python36-six-1.11.0-3.el7.noarch 28/52 Verifying : apr-util-1.5.2-6.el7.x86_64 29/52 Verifying : python36-chardet-2.3.0-6.el7.noarch 30/52 Verifying : patch-2.7.1-10.el7_5.x86_64 31/52 Verifying : distribution-gpg-keys-1.31-1.el7.noarch 32/52 Verifying : pakchois-0.4-10.el7.x86_64 33/52 Verifying : usermode-1.111-5.el7.x86_64 34/52 Verifying : apr-1.4.8-3.el7_4.1.x86_64 35/52 Verifying : libproxy-0.4.11-11.el7.x86_64 36/52 Verifying : mock-core-configs-30.4-1.el7.noarch 37/52 Verifying : neon-0.30.0-3.el7.x86_64 38/52 Verifying : bzip2-1.0.6-13.el7.x86_64 39/52 Verifying : subversion-1.7.14-14.el7.x86_64 40/52 Verifying : python36-distro-1.2.0-3.el7.noarch 41/52 Verifying : glibc-headers-2.17-260.el7_6.5.x86_64 42/52 Verifying : dwz-0.11-3.el7.x86_64 43/52 Verifying : unzip-6.0-19.el7.x86_64 44/52 Verifying : python36-markupsafe-0.23-3.el7.x86_64 45/52 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 46/52 Verifying : python36-requests-2.12.5-3.el7.noarch 47/52 Verifying : python36-jinja2-2.8.1-2.el7.noarch 48/52 Verifying : python36-libs-3.6.8-1.el7.x86_64 49/52 Verifying : elfutils-0.172-2.el7.x86_64 50/52 Verifying : 
python36-rpm-4.11.3-4.el7.x86_64 51/52 Verifying : libmpc-1.0.1-3.el7.x86_64 52/52 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.16-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.31-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5 glibc-headers.x86_64 0:2.17-260.el7_6.5 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.21.3.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.4-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python36.x86_64 0:3.6.8-1.el7 python36-chardet.noarch 0:2.3.0-6.el7 python36-distro.noarch 0:1.2.0-3.el7 python36-idna.noarch 0:2.7-2.el7 python36-jinja2.noarch 0:2.8.1-2.el7 python36-libs.x86_64 0:3.6.8-1.el7 python36-markupsafe.x86_64 0:0.23-3.el7 python36-pyroute2.noarch 0:0.4.13-2.el7 python36-pysocks.noarch 0:1.6.8-6.el7 python36-requests.noarch 0:2.12.5-3.el7 python36-rpm.x86_64 0:4.11.3-4.el7 python36-setuptools.noarch 0:39.2.0-3.el7 python36-six.noarch 0:1.11.0-3.el7 python36-urllib3.noarch 0:1.19.1-5.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1988 0 --:--:-- --:--:-- --:--:-- 2003 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 8513k 100 8513k 0 0 9430k 0 --:--:-- --:--:-- --:--:-- 25.4M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 1948 0 --:--:-- --:--:-- --:--:-- 1953 5 38.3M 5 2243k 0 0 3478k 0 0:00:11 --:--:-- 0:00:11 3478k100 38.3M 100 38.3M 0 0 33.2M 0 0:00:01 0:00:01 --:--:-- 70.9M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 550 0 --:--:-- --:--:-- --:--:-- 550 0 0 0 620 0 0 1661 0 --:--:-- --:--:-- --:--:-- 1661 14 10.7M 14 1579k 0 0 1470k 0 0:00:07 0:00:01 0:00:06 1470k 65 10.7M 65 7172k 0 0 3460k 0 0:00:03 0:00:02 0:00:01 5598k100 10.7M 100 10.7M 0 0 4298k 0 0:00:02 0:00:02 --:--:-- 6350k ~/nightlyrpmknRuNN/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmknRuNN/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmknRuNN/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmknRuNN ~ INFO: mock.py version 1.4.16 starting (python version = 3.6.8)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmknRuNN/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.16 INFO: Mock Version: 1.4.16 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmknRuNN/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 24 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M d46318c1cc264710a19b434d63a6ae69 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.9aloalfs:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins3816831917969457897.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done 4721ee57 +---------+----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 264 | n9.gusty | 172.19.2.137 | gusty | 3719 | Deployed | 4721ee57 | None | None | 7 | x86_64 | 1 | 2080 | None | +---------+----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Thu Jun 27 00:40:53 2019 From: ci at centos.org (ci at centos.org) Date: Thu, 27 Jun 2019 00:40:53 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #211 In-Reply-To: <1993745910.2223.1561509650962.JavaMail.jenkins@jenkins.ci.centos.org> References: <1993745910.2223.1561509650962.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <980766027.2335.1561596054120.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 287.52 KB...] 
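[Editor's note] The post-build script quoted above is flattened onto one line in the archive; a minimal, self-contained sketch of its release loop follows. The `cico` CLI only exists on the CentOS CI nodes, so it is stubbed here (an assumption for runnability) — the loop itself matches the script.

```shell
# Readable sketch of cico-node-done-from-ansible.sh as run by the post-build
# task above. "cico" is stubbed so the loop can be exercised anywhere; the
# real command is `cico -q node done <ssid>`.
cico() {
    # Stand-in for the real CLI call; $4 is the session ID argument.
    echo "released $4"
}

# The node-request step writes one Duffy session ID per line into this file
# (the real script defaults it with SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}).
WORKSPACE=$(mktemp -d)
SSID_FILE="$WORKSPACE/cico-ssid"
printf '4721ee57\n' > "$SSID_FILE"

# Release every session recorded in the SSID file.
for ssid in $(cat "$SSID_FILE"); do
    cico -q node done "$ssid"
done
```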
TASK [container-engine/docker : check number of search domains] **************** Thursday 27 June 2019 01:40:11 +0100 (0:00:00.297) 0:03:01.809 ********* TASK [container-engine/docker : check length of search domains] **************** Thursday 27 June 2019 01:40:12 +0100 (0:00:00.288) 0:03:02.098 ********* TASK [container-engine/docker : check for minimum kernel version] ************** Thursday 27 June 2019 01:40:12 +0100 (0:00:00.295) 0:03:02.394 ********* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Thursday 27 June 2019 01:40:12 +0100 (0:00:00.287) 0:03:02.681 ********* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] *** Thursday 27 June 2019 01:40:13 +0100 (0:00:00.620) 0:03:03.302 ********* TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Thursday 27 June 2019 01:40:14 +0100 (0:00:01.317) 0:03:04.619 ********* TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Thursday 27 June 2019 01:40:14 +0100 (0:00:00.254) 0:03:04.874 ********* TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Thursday 27 June 2019 01:40:15 +0100 (0:00:00.252) 0:03:05.126 ********* TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Thursday 27 June 2019 01:40:15 +0100 (0:00:00.307) 0:03:05.433 ********* TASK [container-engine/docker : Configure docker repository on Fedora] ********* Thursday 27 June 2019 01:40:15 +0100 (0:00:00.298) 0:03:05.732 ********* TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Thursday 27 June 2019 01:40:15 +0100 (0:00:00.285) 0:03:06.018 ********* TASK [container-engine/docker : Copy yum.conf for editing] ********************* Thursday 27 June 2019 01:40:16 +0100 (0:00:00.268) 0:03:06.287 ********* TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Thursday 27 June 2019 
01:40:16 +0100 (0:00:00.275) 0:03:06.563 ********* TASK [container-engine/docker : ensure docker packages are installed] ********** Thursday 27 June 2019 01:40:16 +0100 (0:00:00.296) 0:03:06.860 ********* TASK [container-engine/docker : Ensure docker packages are installed] ********** Thursday 27 June 2019 01:40:17 +0100 (0:00:00.350) 0:03:07.210 ********* TASK [container-engine/docker : get available packages on Ubuntu] ************** Thursday 27 June 2019 01:40:17 +0100 (0:00:00.341) 0:03:07.551 ********* TASK [container-engine/docker : show available packages on ubuntu] ************* Thursday 27 June 2019 01:40:17 +0100 (0:00:00.278) 0:03:07.830 ********* TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Thursday 27 June 2019 01:40:18 +0100 (0:00:00.286) 0:03:08.117 ********* TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Thursday 27 June 2019 01:40:18 +0100 (0:00:00.274) 0:03:08.392 ********* ok: [kube1] ok: [kube3] ok: [kube2] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Thursday 27 June 2019 01:40:20 +0100 (0:00:02.045) 0:03:10.437 ********* ok: [kube1] ok: [kube2] ok: [kube3] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] ***
Thursday 27 June 2019  01:40:21 +0100 (0:00:01.142)       0:03:11.580 *********

TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] ***
Thursday 27 June 2019  01:40:21 +0100 (0:00:00.275)       0:03:11.855 *********
changed: [kube1]
changed: [kube3]
changed: [kube2]

TASK [container-engine/docker : Write docker proxy drop-in] ********************
Thursday 27 June 2019  01:40:22 +0100 (0:00:01.036)       0:03:12.892 *********

TASK [container-engine/docker : get systemd version] ***************************
Thursday 27 June 2019  01:40:23 +0100 (0:00:00.352)       0:03:13.244 *********

TASK [container-engine/docker : Write docker.service systemd file] *************
Thursday 27 June 2019  01:40:23 +0100 (0:00:00.400)       0:03:13.645 *********

TASK [container-engine/docker : Write docker options systemd drop-in] **********
Thursday 27 June 2019  01:40:23 +0100 (0:00:00.359)       0:03:14.004 *********
changed: [kube1]
changed: [kube2]
changed: [kube3]

TASK [container-engine/docker : Write docker dns systemd drop-in] **************
Thursday 27 June 2019  01:40:25 +0100 (0:00:01.963)       0:03:15.968 *********
changed: [kube1]
changed: [kube3]
changed: [kube2]

TASK [container-engine/docker : Copy docker orphan clean up script to the node] ***
Thursday 27 June 2019  01:40:27 +0100 (0:00:02.042)       0:03:18.011 *********

TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] ***
Thursday 27 June 2019  01:40:28 +0100 (0:00:00.320)       0:03:18.332 *********

RUNNING HANDLER [container-engine/docker : restart docker] *********************
Thursday 27 June 2019  01:40:28 +0100 (0:00:00.232)       0:03:18.565 *********
changed: [kube1]
changed: [kube2]
changed: [kube3]

RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************
Thursday 27 June 2019  01:40:29 +0100 (0:00:01.045)       0:03:19.610 *********
changed: [kube2]
changed: [kube3]
changed: [kube1]

RUNNING HANDLER [container-engine/docker : Docker | reload docker.socket] ******
Thursday 27 June 2019  01:40:30 +0100 (0:00:01.191)       0:03:20.802 *********

RUNNING HANDLER [container-engine/docker : Docker | reload docker] *************
Thursday 27 June 2019  01:40:31 +0100 (0:00:00.333)       0:03:21.135 *********
changed: [kube1]
changed: [kube3]
changed: [kube2]

RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] ***
Thursday 27 June 2019  01:40:35 +0100 (0:00:04.248)       0:03:25.384 *********
Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
[container-engine/docker : Docker | pause while Docker restarts]
Waiting for docker restart:
ok: [kube1]

RUNNING HANDLER [container-engine/docker : Docker | wait for docker] ***********
Thursday 27 June 2019  01:40:45 +0100 (0:00:10.198)       0:03:35.583 *********
changed: [kube1]
changed: [kube2]
changed: [kube3]

TASK [container-engine/docker : ensure docker service is started and enabled] ***
Thursday 27 June 2019  01:40:46 +0100 (0:00:01.247)       0:03:36.830 *********
ok: [kube1] => (item=docker)
ok: [kube2] => (item=docker)
ok: [kube3] => (item=docker)

TASK [download : include_tasks] ************************************************
Thursday 27 June 2019  01:40:47 +0100 (0:00:01.238)       0:03:38.069 *********
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3

TASK [download : Register docker images info] **********************************
Thursday 27 June 2019  01:40:48 +0100 (0:00:00.526)       0:03:38.595 *********
ok: [kube1]
ok: [kube2]
ok: [kube3]

TASK [download : container_download | Create dest directory for saved/loaded container images] ***
Thursday 27 June 2019  01:40:49 +0100 (0:00:01.202)       0:03:39.798 *********
changed: [kube1]
changed: [kube2]
changed: [kube3]

TASK [download : container_download | create local directory for saved/loaded container images] ***
Thursday 27 June 2019  01:40:50 +0100 (0:00:00.962)       0:03:40.760 *********

TASK [download : Download items] ***********************************************
Thursday 27 June 2019  01:40:50 +0100 (0:00:00.127)       0:03:40.887 *********
fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
[...the same 'delegate_to' failure repeated for kube1, kube2 and kube3 on each remaining download item (27 further identical messages) truncated...]
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3

PLAY RECAP *********************************************************************
kube1                      : ok=109  changed=22   unreachable=0    failed=10   skipped=116  rescued=0    ignored=0
kube2                      : ok=96   changed=22   unreachable=0    failed=10   skipped=111  rescued=0    ignored=0
kube3                      : ok=94   changed=22   unreachable=0    failed=10   skipped=113  rescued=0    ignored=0

Thursday 27 June 2019  01:40:53 +0100 (0:00:02.783)       0:03:43.671 *********
===============================================================================
Install packages ------------------------------------------------------- 33.19s
Wait for host to be available ------------------------------------------ 24.07s
gather facts from all instances ---------------------------------------- 17.48s
container-engine/docker : Docker | pause while Docker restarts --------- 10.20s
Persist loaded modules -------------------------------------------------- 6.27s
container-engine/docker : Docker | reload docker ------------------------ 4.25s
kubernetes/preinstall : Create kubernetes directories ------------------- 4.18s
Gathering Facts --------------------------------------------------------- 2.80s
Load required kernel modules -------------------------------------------- 2.79s
download : Download items ----------------------------------------------- 2.78s
bootstrap-os : Gather nodes hostnames ----------------------------------- 2.64s
bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.51s
Extend root VG ---------------------------------------------------------- 2.42s
kubernetes/preinstall : Create cni directories -------------------------- 2.40s
kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.19s
kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.07s
container-engine/docker : ensure service is started if docker packages are already present --- 2.05s
container-engine/docker : Write docker dns systemd drop-in -------------- 2.04s
download : Download items ----------------------------------------------- 2.04s
Extend the root LV and FS to occupy remaining space --------------------- 1.98s

==> kube3: An error occurred. The error will be shown after all tasks complete.
An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below.
An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again:

Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again.
Build step 'Execute shell' marked build as failure
Performing Post build task...
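Every download item above fails with the same error: recent Ansible releases (2.8 and later) validate task keywords strictly, and `delegate_to` is not accepted on a dynamic `include_tasks` (a TaskInclude object). A minimal sketch of the failing shape and one possible workaround follows; the task bodies and the `download_delegate` variable are illustrative assumptions, not the actual kubespray code:

```yaml
# Failing shape (illustrative): delegating the include itself.
# Ansible >= 2.8 rejects this with:
#   "'delegate_to' is not a valid attribute for a TaskInclude"
- name: container_download | Make download decision if pull is required by tag or sha256
  include_tasks: download_container.yml
  delegate_to: "{{ download_delegate }}"   # invalid on include_tasks

# Possible workaround (assumption, not the actual upstream fix):
# keep the include plain and delegate inside the included file instead.
- name: container_download | Make download decision if pull is required by tag or sha256
  include_tasks: download_container.yml

# download_container.yml (hypothetical contents):
# - name: container_download | Inspect image on the delegate host
#   command: docker images -q {{ image_name }}   # illustrative task
#   delegate_to: "{{ download_delegate }}"
```

A playbook with this shape runs on older Ansible, which is why the same tree of tasks passed in earlier nightlies and only started failing once the CI node picked up a newer Ansible.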
Could not match :Build started  : False
Logical operation result is FALSE
Skipping script  :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0

From ci at centos.org  Thu Jun 27 01:23:10 2019
From: ci at centos.org (ci at centos.org)
Date: Thu, 27 Jun 2019 01:23:10 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #236
In-Reply-To: <1677122934.2229.1561511871591.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1677122934.2229.1561511871591.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1672101408.2341.1561598590621.JavaMail.jenkins@jenkins.ci.centos.org>

See 
------------------------------------------
[...truncated 56.49 KB...]
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) creation to complete] *******************************
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=6    changed=4    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0

--> Scenario: 'default'
--> Action: 'prepare'

PLAY [Prepare] *****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

PLAY RECAP *********************************************************************
instance                   : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: author
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[703] Should change default metadata: description
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[703] Should change default metadata: company
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[703] Should change default metadata: license
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[...each finding is followed by the same meta/main.yml metadata dump as shown under [701]; repeats truncated...]
An error occurred during the test sequence action: 'lint'. Cleaning up.
--> Scenario: 'default'
--> Action: 'destroy'

PLAY [Destroy] *****************************************************************

TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Delete docker network(s)] ************************************************

PLAY RECAP *********************************************************************
localhost                  : ok=2    changed=2    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0

./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── create
    └── prepare

--> Scenario: 'default'
--> Action: 'create'
[DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.

PLAY [Create] ******************************************************************

TASK [Log into a Docker registry] **********************************************
skipping: [localhost] => (item=None)

TASK [Create Dockerfiles from image names] *************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Discover local Docker images] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Build an Ansible compatible image] ***************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Create docker network(s)] ************************************************

TASK [Determine the CMD directives] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Create molecule instance(s)] *********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) creation to complete] *******************************
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=6    changed=3    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0

--> Scenario: 'default'
--> Action: 'prepare'

PLAY [Prepare] *****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

PLAY RECAP *********************************************************************
instance                   : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: author
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[703] Should change default metadata: description
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[703] Should change default metadata: company
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[703] Should change default metadata: license
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[...each finding is followed by the same meta/main.yml metadata dump as shown under [701]; repeats truncated...]
An error occurred during the test sequence action: 'lint'. Cleaning up.
--> Scenario: 'default'
--> Action: 'destroy'

PLAY [Destroy] *****************************************************************

TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Delete docker network(s)] ************************************************

PLAY RECAP *********************************************************************
localhost                  : ok=2    changed=2    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0

Build step 'Execute shell' marked build as failure
Performing Post build task...
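The [701] and [703] findings above are ansible-lint flagging that `roles/firewall_config/meta/main.yml` still carries the unedited `ansible-galaxy init` placeholder metadata ("your name", "your description", and so on) and no `platforms` list, which aborts the molecule `lint` action and fails the build. A minimal sketch of a `galaxy_info` block that would satisfy both rules; the author, company, license, and platform values below are placeholders for illustration, not the project's real metadata:

```yaml
# meta/main.yml -- illustrative values only
galaxy_info:
  author: Gluster maintainers                    # rule 703: replace "your name"
  description: Configure firewalld for GlusterFS # rule 703: replace "your description"
  company: Red Hat                               # rule 703: placeholder
  license: GPLv3                                 # rule 703: replace "license (GPLv2, CC-BY, etc)"
  min_ansible_version: 2.5
  platforms:                                     # rule 701: role info should contain platforms
    - name: EL
      versions:
        - 7
  galaxy_tags: []
dependencies: []
```

Since the same findings are printed for both CI runs in this build, filling in this one file would clear all five lint errors at once.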
Could not match :Build started  : False
Logical operation result is FALSE
Skipping script  :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0

From ci at centos.org  Fri Jun 28 00:16:00 2019
From: ci at centos.org (ci at centos.org)
Date: Fri, 28 Jun 2019 00:16:00 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #408
In-Reply-To: <699309315.2332.1561594568479.JavaMail.jenkins@jenkins.ci.centos.org>
References: <699309315.2332.1561594568479.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <2143336994.2451.1561680960082.JavaMail.jenkins@jenkins.ci.centos.org>

See 
Changes:

[prasanna.kalever] gluster-block: add gperftools-devel as dependency

------------------------------------------
[...truncated 38.66 KB...]
Transaction test succeeded Running transaction Installing : python36-libs-3.6.8-1.el7.x86_64 1/52 Installing : python36-3.6.8-1.el7.x86_64 2/52 Installing : apr-1.4.8-3.el7_4.1.x86_64 3/52 Installing : mpfr-3.1.1-4.el7.x86_64 4/52 Installing : libmpc-1.0.1-3.el7.x86_64 5/52 Installing : apr-util-1.5.2-6.el7.x86_64 6/52 Installing : python36-six-1.11.0-3.el7.noarch 7/52 Installing : cpp-4.8.5-36.el7_6.2.x86_64 8/52 Installing : python36-idna-2.7-2.el7.noarch 9/52 Installing : python36-pysocks-1.6.8-6.el7.noarch 10/52 Installing : python36-urllib3-1.19.1-5.el7.noarch 11/52 Installing : python36-pyroute2-0.4.13-2.el7.noarch 12/52 Installing : python36-setuptools-39.2.0-3.el7.noarch 13/52 Installing : python36-chardet-2.3.0-6.el7.noarch 14/52 Installing : python36-requests-2.12.5-3.el7.noarch 15/52 Installing : python36-distro-1.2.0-3.el7.noarch 16/52 Installing : python36-markupsafe-0.23-3.el7.x86_64 17/52 Installing : python36-jinja2-2.8.1-2.el7.noarch 18/52 Installing : python36-rpm-4.11.3-4.el7.x86_64 19/52 Installing : elfutils-0.172-2.el7.x86_64 20/52 
Installing : unzip-6.0-19.el7.x86_64 21/52 Installing : dwz-0.11-3.el7.x86_64 22/52 Installing : bzip2-1.0.6-13.el7.x86_64 23/52 Installing : usermode-1.111-5.el7.x86_64 24/52 Installing : pakchois-0.4-10.el7.x86_64 25/52 Installing : distribution-gpg-keys-1.31-1.el7.noarch 26/52 Installing : mock-core-configs-30.4-1.el7.noarch 27/52 Installing : patch-2.7.1-10.el7_5.x86_64 28/52 Installing : libmodman-2.0.1-8.el7.x86_64 29/52 Installing : libproxy-0.4.11-11.el7.x86_64 30/52 Installing : gdb-7.6.1-114.el7.x86_64 31/52 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/52 Installing : perl-srpm-macros-1-8.el7.noarch 33/52 Installing : pigz-2.3.4-1.el7.x86_64 34/52 Installing : golang-src-1.11.5-1.el7.noarch 35/52 Installing : kernel-headers-3.10.0-957.21.3.el7.x86_64 36/52 Installing : glibc-headers-2.17-260.el7_6.5.x86_64 37/52 Installing : glibc-devel-2.17-260.el7_6.5.x86_64 38/52 Installing : gcc-4.8.5-36.el7_6.2.x86_64 39/52 Installing : nettle-2.7.1-8.el7.x86_64 40/52 Installing : zip-3.0-11.el7.x86_64 41/52 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 42/52 Installing : mercurial-2.6.2-8.el7_4.x86_64 43/52 Installing : trousers-0.3.14-2.el7.x86_64 44/52 Installing : gnutls-3.3.29-9.el7_6.x86_64 45/52 Installing : neon-0.30.0-3.el7.x86_64 46/52 Installing : subversion-libs-1.7.14-14.el7.x86_64 47/52 Installing : subversion-1.7.14-14.el7.x86_64 48/52 Installing : golang-1.11.5-1.el7.x86_64 49/52 Installing : golang-bin-1.11.5-1.el7.x86_64 50/52 Installing : rpm-build-4.11.3-35.el7.x86_64 51/52 Installing : mock-1.4.16-1.el7.noarch 52/52 Verifying : trousers-0.3.14-2.el7.x86_64 1/52 Verifying : python36-idna-2.7-2.el7.noarch 2/52 Verifying : rpm-build-4.11.3-35.el7.x86_64 3/52 Verifying : python36-pysocks-1.6.8-6.el7.noarch 4/52 Verifying : mercurial-2.6.2-8.el7_4.x86_64 5/52 Verifying : zip-3.0-11.el7.x86_64 6/52 Verifying : python36-3.6.8-1.el7.x86_64 7/52 Verifying : subversion-libs-1.7.14-14.el7.x86_64 8/52 Verifying : 
python36-urllib3-1.19.1-5.el7.noarch 9/52 Verifying : nettle-2.7.1-8.el7.x86_64 10/52 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 11/52 Verifying : kernel-headers-3.10.0-957.21.3.el7.x86_64 12/52 Verifying : golang-src-1.11.5-1.el7.noarch 13/52 Verifying : python36-pyroute2-0.4.13-2.el7.noarch 14/52 Verifying : pigz-2.3.4-1.el7.x86_64 15/52 Verifying : perl-srpm-macros-1-8.el7.noarch 16/52 Verifying : golang-1.11.5-1.el7.x86_64 17/52 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 18/52 Verifying : glibc-devel-2.17-260.el7_6.5.x86_64 19/52 Verifying : golang-bin-1.11.5-1.el7.x86_64 20/52 Verifying : gdb-7.6.1-114.el7.x86_64 21/52 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 22/52 Verifying : gnutls-3.3.29-9.el7_6.x86_64 23/52 Verifying : mock-1.4.16-1.el7.noarch 24/52 Verifying : libmodman-2.0.1-8.el7.x86_64 25/52 Verifying : python36-setuptools-39.2.0-3.el7.noarch 26/52 Verifying : mpfr-3.1.1-4.el7.x86_64 27/52 Verifying : python36-six-1.11.0-3.el7.noarch 28/52 Verifying : apr-util-1.5.2-6.el7.x86_64 29/52 Verifying : python36-chardet-2.3.0-6.el7.noarch 30/52 Verifying : patch-2.7.1-10.el7_5.x86_64 31/52 Verifying : distribution-gpg-keys-1.31-1.el7.noarch 32/52 Verifying : pakchois-0.4-10.el7.x86_64 33/52 Verifying : usermode-1.111-5.el7.x86_64 34/52 Verifying : apr-1.4.8-3.el7_4.1.x86_64 35/52 Verifying : libproxy-0.4.11-11.el7.x86_64 36/52 Verifying : mock-core-configs-30.4-1.el7.noarch 37/52 Verifying : neon-0.30.0-3.el7.x86_64 38/52 Verifying : bzip2-1.0.6-13.el7.x86_64 39/52 Verifying : subversion-1.7.14-14.el7.x86_64 40/52 Verifying : python36-distro-1.2.0-3.el7.noarch 41/52 Verifying : glibc-headers-2.17-260.el7_6.5.x86_64 42/52 Verifying : dwz-0.11-3.el7.x86_64 43/52 Verifying : unzip-6.0-19.el7.x86_64 44/52 Verifying : python36-markupsafe-0.23-3.el7.x86_64 45/52 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 46/52 Verifying : python36-requests-2.12.5-3.el7.noarch 47/52 Verifying : python36-jinja2-2.8.1-2.el7.noarch 48/52 Verifying : 
python36-libs-3.6.8-1.el7.x86_64 49/52 Verifying : elfutils-0.172-2.el7.x86_64 50/52 Verifying : python36-rpm-4.11.3-4.el7.x86_64 51/52 Verifying : libmpc-1.0.1-3.el7.x86_64 52/52 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.16-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.31-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5 glibc-headers.x86_64 0:2.17-260.el7_6.5 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.21.3.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.4-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python36.x86_64 0:3.6.8-1.el7 python36-chardet.noarch 0:2.3.0-6.el7 python36-distro.noarch 0:1.2.0-3.el7 python36-idna.noarch 0:2.7-2.el7 python36-jinja2.noarch 0:2.8.1-2.el7 python36-libs.x86_64 0:3.6.8-1.el7 python36-markupsafe.x86_64 0:0.23-3.el7 python36-pyroute2.noarch 0:0.4.13-2.el7 python36-pysocks.noarch 0:1.6.8-6.el7 python36-requests.noarch 0:2.12.5-3.el7 python36-rpm.x86_64 0:4.11.3-4.el7 python36-setuptools.noarch 0:39.2.0-3.el7 python36-six.noarch 0:1.11.0-3.el7 python36-urllib3.noarch 0:1.19.1-5.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0
[...curl download progress truncated...]
Installing gometalinter.
Version: 2.0.5
[...curl download progress truncated...]
Installing etcd.
Version: v3.3.9
[...curl download progress truncated...]
~/nightlyrpmWHD7WN/go/src/github.com/gluster/glusterd2 ~
Installing vendored packages
Creating dist archive /root/nightlyrpmWHD7WN/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz
Created dist archive /root/nightlyrpmWHD7WN/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz
~ ~/nightlyrpmWHD7WN ~
INFO: mock.py version 1.4.16 starting (python version = 3.6.8)...
Start: init plugins
INFO: selinux disabled
Finish: init plugins
Start: run
INFO: Start(/root/nightlyrpmWHD7WN/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm)  Config(epel-7-x86_64)
Start: clean chroot
Finish: clean chroot
Start: chroot init
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled yum cache
Start: cleaning yum metadata
Finish: cleaning yum metadata
INFO: enabled HW Info plugin
Mock Version: 1.4.16
INFO: Mock Version: 1.4.16
Start: yum install
Finish: yum install
Start: creating root cache
Finish: creating root cache
Finish: chroot init
INFO: Installed packages:
Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: Outputting list of installed packages
Finish: Outputting list of installed packages
ERROR: Exception(/root/nightlyrpmWHD7WN/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 27 seconds
INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64
INFO: Cleaning up build root ('cleanup_on_failure=True')
Start: clean chroot
Finish: clean chroot
ERROR: Command failed:
 # /usr/bin/systemd-nspawn -q -M 8478856c71e7437c86e863e4c555e731 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.s4behhez:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$  --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for :Building remotely : True
Logical operation result is TRUE
Running script  :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins1172157519725385229.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done 32a7d9dd
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname   | ip_address  | chassis | used_count | current_state | comment  | distro | rel  | centos_version | architecture | node_pool | console_port | flavor |
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 172     | n45.crusty | 172.19.2.45 | crusty  | 3733       | Deployed      | 32a7d9dd | None   | None | 7              | x86_64       | 1         | 2440         | None   |
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0

From ci at centos.org  Fri Jun 28 00:37:09 2019
From: ci at centos.org (ci at centos.org)
Date: Fri, 28 Jun 2019 00:37:09 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #212
In-Reply-To: <980766027.2335.1561596054120.JavaMail.jenkins@jenkins.ci.centos.org>
References: <980766027.2335.1561596054120.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <99052285.2452.1561682230152.JavaMail.jenkins@jenkins.ci.centos.org>

See 

Changes:

[prasanna.kalever] gluster-block: add gperftools-devel as dependency

------------------------------------------
[...truncated 287.43 KB...]
TASK [container-engine/docker : check number of search domains] **************** Friday 28 June 2019 01:36:44 +0100 (0:00:00.131) 0:02:01.232 *********** TASK [container-engine/docker : check length of search domains] **************** Friday 28 June 2019 01:36:44 +0100 (0:00:00.129) 0:02:01.361 *********** TASK [container-engine/docker : check for minimum kernel version] ************** Friday 28 June 2019 01:36:44 +0100 (0:00:00.132) 0:02:01.494 *********** TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Friday 28 June 2019 01:36:44 +0100 (0:00:00.124) 0:02:01.618 *********** TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] *** Friday 28 June 2019 01:36:44 +0100 (0:00:00.244) 0:02:01.863 *********** TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Friday 28 June 2019 01:36:45 +0100 (0:00:00.620) 0:02:02.483 *********** TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Friday 28 June 2019 01:36:45 +0100 (0:00:00.110) 0:02:02.594 *********** TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Friday 28 June 2019 01:36:45 +0100 (0:00:00.109) 0:02:02.703 *********** TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Friday 28 June 2019 01:36:45 +0100 (0:00:00.137) 0:02:02.840 *********** TASK [container-engine/docker : Configure docker repository on Fedora] ********* Friday 28 June 2019 01:36:45 +0100 (0:00:00.145) 0:02:02.986 *********** TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Friday 28 June 2019 01:36:45 +0100 (0:00:00.121) 0:02:03.107 *********** TASK [container-engine/docker : Copy yum.conf for editing] ********************* Friday 28 June 2019 01:36:46 +0100 (0:00:00.122) 0:02:03.230 *********** TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Friday 28 June 2019 
01:36:46 +0100 (0:00:00.123) 0:02:03.354 *********** TASK [container-engine/docker : ensure docker packages are installed] ********** Friday 28 June 2019 01:36:46 +0100 (0:00:00.125) 0:02:03.479 *********** TASK [container-engine/docker : Ensure docker packages are installed] ********** Friday 28 June 2019 01:36:46 +0100 (0:00:00.160) 0:02:03.639 *********** TASK [container-engine/docker : get available packages on Ubuntu] ************** Friday 28 June 2019 01:36:46 +0100 (0:00:00.151) 0:02:03.791 *********** TASK [container-engine/docker : show available packages on ubuntu] ************* Friday 28 June 2019 01:36:46 +0100 (0:00:00.125) 0:02:03.917 *********** TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Friday 28 June 2019 01:36:46 +0100 (0:00:00.123) 0:02:04.041 *********** TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Friday 28 June 2019 01:36:47 +0100 (0:00:00.124) 0:02:04.166 *********** ok: [kube2] ok: [kube3] ok: [kube1] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Friday 28 June 2019 01:36:47 +0100 (0:00:00.882) 0:02:05.048 *********** ok: [kube1] ok: [kube2] ok: [kube3] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Friday 28 June 2019 01:36:48 +0100 (0:00:00.573) 0:02:05.622 *********** TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Friday 28 June 2019 01:36:48 +0100 (0:00:00.124) 0:02:05.746 *********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Friday 28 June 2019 01:36:49 +0100 (0:00:00.451) 0:02:06.198 *********** TASK [container-engine/docker : get systemd version] *************************** Friday 28 June 2019 01:36:49 +0100 (0:00:00.148) 0:02:06.347 *********** TASK [container-engine/docker : Write docker.service systemd file] ************* Friday 28 June 2019 01:36:49 +0100 (0:00:00.141) 0:02:06.488 *********** TASK [container-engine/docker : Write docker options systemd drop-in] ********** Friday 28 June 2019 01:36:49 +0100 (0:00:00.144) 0:02:06.633 *********** changed: [kube2] changed: [kube1] changed: [kube3] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Friday 28 June 2019 01:36:50 +0100 (0:00:00.973) 0:02:07.607 *********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Friday 28 June 2019 01:36:51 +0100 (0:00:00.966) 0:02:08.573 *********** TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Friday 28 June 2019 01:36:51 +0100 (0:00:00.140) 0:02:08.714 *********** RUNNING HANDLER [container-engine/docker : restart docker] ********************* Friday 28 June 2019 01:36:51 +0100 (0:00:00.125) 0:02:08.839 *********** changed: [kube2] changed: [kube1] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Friday 28 June 2019 01:36:52 +0100 (0:00:00.434) 0:02:09.273 *********** changed: [kube2] changed: [kube1] changed: [kube3] RUNNING HANDLER 
[container-engine/docker : Docker | reload docker.socket] ****** Friday 28 June 2019 01:36:52 +0100 (0:00:00.518) 0:02:09.791 *********** RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Friday 28 June 2019 01:36:52 +0100 (0:00:00.130) 0:02:09.922 *********** changed: [kube1] changed: [kube3] changed: [kube2] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Friday 28 June 2019 01:36:55 +0100 (0:00:03.081) 0:02:13.003 *********** Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube2] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Friday 28 June 2019 01:37:05 +0100 (0:00:10.077) 0:02:23.081 *********** changed: [kube2] changed: [kube1] changed: [kube3] TASK [container-engine/docker : ensure docker service is started and enabled] *** Friday 28 June 2019 01:37:06 +0100 (0:00:00.530) 0:02:23.611 *********** ok: [kube1] => (item=docker) ok: [kube2] => (item=docker) ok: [kube3] => (item=docker) TASK [download : include_tasks] ************************************************ Friday 28 June 2019 01:37:07 +0100 (0:00:00.558) 0:02:24.169 *********** included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Friday 28 June 2019 01:37:07 +0100 (0:00:00.216) 0:02:24.386 *********** ok: [kube2] ok: [kube1] ok: [kube3] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Friday 28 June 2019 01:37:07 +0100 (0:00:00.601) 0:02:24.987 *********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [download : container_download | create local directory for saved/loaded container images] *** Friday 28 June 2019 01:37:08 +0100 (0:00:00.469) 0:02:25.457 *********** TASK [download : 
Download items] ***********************************************
Friday 28 June 2019  01:37:08 +0100 (0:00:00.070)       0:02:25.528 ***********

fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n  ^ here\n"}
fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n  ^ here\n"}
fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n  ^ here\n"}
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube2, kube1, kube3
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube2, kube1, kube3
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube2, kube1, kube3
[...the same 'delegate_to' error repeated for kube1, kube2 and kube3 on each remaining download task...]

PLAY RECAP *********************************************************************
kube1                      : ok=108  changed=22   unreachable=0    failed=10   skipped=116  rescued=0    ignored=0
kube2                      : ok=97   changed=22   unreachable=0    failed=10   skipped=111  rescued=0    ignored=0
kube3                      : ok=94   changed=22   unreachable=0    failed=10   skipped=113  rescued=0    ignored=0

Friday 28 June 2019  01:37:09 +0100 (0:00:01.416)       0:02:26.944 ***********
===============================================================================
Install packages ------------------------------------------------------- 25.10s
Extend root VG --------------------------------------------------------- 18.47s
Wait for host to be available ------------------------------------------ 16.27s
container-engine/docker : Docker | pause while Docker restarts --------- 10.08s
gather facts from all instances ----------------------------------------- 9.93s
container-engine/docker : Docker | reload docker ------------------------ 3.08s
Persist loaded modules -------------------------------------------------- 2.87s
kubernetes/preinstall : Create kubernetes directories ------------------- 1.94s
Load required kernel modules -------------------------------------------- 1.69s
bootstrap-os : Gather nodes hostnames ----------------------------------- 1.61s
Extend the root LV and FS to occupy remaining space --------------------- 1.51s
download : Download items ----------------------------------------------- 1.42s
bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 1.39s
Gathering Facts --------------------------------------------------------- 1.24s
kubernetes/preinstall : Create cni directories -------------------------- 1.17s
download : Sync container ----------------------------------------------- 1.16s
bootstrap-os : Create remote_tmp for it is used by another module ------- 1.14s
bootstrap-os : Disable fastestmirror plugin ----------------------------- 1.11s
download : Download items ----------------------------------------------- 1.08s
kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 1.01s

==> kube3: An error occurred. The error will be shown after all tasks complete.
An error occurred while executing multiple actions in parallel.
Any errors that occurred are shown below.

An error occurred while executing the action on the 'kube3' machine.
Please handle this error then try again:

Ansible failed to complete successfully. Any error output should
be visible above. Please fix these errors and try again.
Build step 'Execute shell' marked build as failure
Performing Post build task...
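[Editor's note: the repeated failure above is Ansible rejecting `delegate_to` on a task include; newer Ansible releases treat `delegate_to` on an `include_tasks` (a TaskInclude object) as a hard parse error rather than a warning, which is why every host fails before any download runs. The sketch below is an illustrative reduction of the pattern and the usual fix, not the actual kubespray source; the task names and the `download_delegate` variable are assumptions.]

```yaml
# Broken: `delegate_to` is not a valid attribute for a TaskInclude,
# so Ansible aborts while parsing the include itself.
- name: container_download | include download tasks (fails to parse)
  include_tasks: download_container.yml
  delegate_to: "{{ download_delegate }}"   # rejected on an include

# Working: keep the include bare and move the delegation onto the
# real tasks inside the included file.
- name: container_download | include download tasks
  include_tasks: download_container.yml

# download_container.yml (hypothetical content):
# - name: container_download | pull container
#   command: "docker pull {{ pull_args }}"
#   delegate_to: "{{ download_delegate }}"
```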
Could not match :Build started  : False
Logical operation result is FALSE
Skipping script  : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by

SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0

From ci at centos.org  Fri Jun 28 01:09:17 2019
From: ci at centos.org (ci at centos.org)
Date: Fri, 28 Jun 2019 01:09:17 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #237
In-Reply-To: <1672101408.2341.1561598590621.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1672101408.2341.1561598590621.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1729997130.2456.1561684157089.JavaMail.jenkins@jenkins.ci.centos.org>

See 

Changes:

[prasanna.kalever] gluster-block: add gperftools-devel as dependency

------------------------------------------
[...truncated 56.45 KB...]
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) creation to complete] *******************************
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=6    changed=4    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0

--> Scenario: 'default'
--> Action: 'prepare'

PLAY [Prepare] *****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

PLAY RECAP *********************************************************************
instance                   : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}

[703] Should change default metadata: author
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}

[703] Should change default metadata: description
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}

[703] Should change default metadata: company
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}

[703] Should change default metadata: license
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}

An error occurred during the test sequence action: 'lint'. Cleaning up.
--> Scenario: 'default'
--> Action: 'destroy'

PLAY [Destroy] *****************************************************************

TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Delete docker network(s)] ************************************************

PLAY RECAP *********************************************************************
localhost                  : ok=2    changed=2    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0

./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── create
    └── prepare

--> Scenario: 'default'
--> Action: 'create'
[DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility
but usage is discouraged. The module documentation details page may explain
more about this rationale.. This feature will be removed in a future release.
Deprecation warnings can be disabled by setting deprecation_warnings=False in
ansible.cfg.

PLAY [Create] ******************************************************************

TASK [Log into a Docker registry] **********************************************
skipping: [localhost] => (item=None)

TASK [Create Dockerfiles from image names] *************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Discover local Docker images] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Build an Ansible compatible image] ***************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Create docker network(s)] ************************************************

TASK [Determine the CMD directives] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Create molecule instance(s)] *********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) creation to complete] *******************************
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=6    changed=3    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0

--> Scenario: 'default'
--> Action: 'prepare'

PLAY [Prepare] *****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

PLAY RECAP *********************************************************************
instance                   : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}

[703] Should change default metadata: author
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}

[703] Should change default metadata: description
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}

[703] Should change default metadata: company
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}

[703] Should change default metadata: license
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}

An error occurred during the test sequence action: 'lint'. Cleaning up.
--> Scenario: 'default'
--> Action: 'destroy'

PLAY [Destroy] *****************************************************************

TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Delete docker network(s)] ************************************************

PLAY RECAP *********************************************************************
localhost                  : ok=2    changed=2    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0

Build step 'Execute shell' marked build as failure
Performing Post build task...
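The ansible-lint findings above ([701] and [703]) all point at the same cause: the role's `meta/main.yml` still carries the Galaxy boilerplate (`your name`, `your description`, no `platforms` list). A hedged sketch of a `galaxy_info` block that would satisfy these rules — the concrete values here are illustrative placeholders, not the project's actual metadata:

```yaml
# Illustrative meta/main.yml values only — fill in the real role metadata.
galaxy_info:
  author: Gluster maintainers                    # non-default author      -> clears [703]
  description: Configure firewalld for GlusterFS # non-default description -> clears [703]
  company: example.org                           # non-default company     -> clears [703]
  license: GPLv3                                 # concrete license        -> clears [703]
  min_ansible_version: 2.5
  platforms:                                     # platforms list          -> clears [701]
    - name: EL
      versions:
        - 7
  galaxy_tags: []
dependencies: []
```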
Could not match :Build started  : False
Logical operation result is FALSE
Skipping script  : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by

SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0

From ci at centos.org  Sat Jun 29 00:16:07 2019
From: ci at centos.org (ci at centos.org)
Date: Sat, 29 Jun 2019 00:16:07 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #409
In-Reply-To: <2143336994.2451.1561680960082.JavaMail.jenkins@jenkins.ci.centos.org>
References: <2143336994.2451.1561680960082.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <2058339214.2517.1561767367661.JavaMail.jenkins@jenkins.ci.centos.org>

See 

------------------------------------------
[...truncated 38.65 KB...]
Transaction test succeeded
Running transaction
  Installing : python36-libs-3.6.8-1.el7.x86_64        1/52
  Installing : python36-3.6.8-1.el7.x86_64             2/52
  Installing : apr-1.4.8-3.el7_4.1.x86_64              3/52
  Installing : mpfr-3.1.1-4.el7.x86_64                 4/52
  Installing : libmpc-1.0.1-3.el7.x86_64               5/52
  Installing : apr-util-1.5.2-6.el7.x86_64             6/52
  Installing : python36-six-1.11.0-3.el7.noarch        7/52
  Installing : cpp-4.8.5-36.el7_6.2.x86_64             8/52
  Installing : python36-idna-2.7-2.el7.noarch          9/52
  Installing : python36-pysocks-1.6.8-6.el7.noarch    10/52
  Installing : python36-urllib3-1.19.1-5.el7.noarch   11/52
  Installing : python36-pyroute2-0.4.13-2.el7.noarch  12/52
  Installing : python36-setuptools-39.2.0-3.el7.noarch 13/52
  Installing : python36-chardet-2.3.0-6.el7.noarch    14/52
  Installing : python36-requests-2.12.5-3.el7.noarch  15/52
  Installing : python36-distro-1.2.0-3.el7.noarch     16/52
  Installing : python36-markupsafe-0.23-3.el7.x86_64  17/52
  Installing : python36-jinja2-2.8.1-2.el7.noarch     18/52
  Installing : python36-rpm-4.11.3-4.el7.x86_64       19/52
  Installing : elfutils-0.172-2.el7.x86_64            20/52
  Installing : unzip-6.0-19.el7.x86_64                21/52
  Installing :
dwz-0.11-3.el7.x86_64 22/52 Installing : bzip2-1.0.6-13.el7.x86_64 23/52 Installing : usermode-1.111-5.el7.x86_64 24/52 Installing : pakchois-0.4-10.el7.x86_64 25/52 Installing : distribution-gpg-keys-1.31-1.el7.noarch 26/52 Installing : mock-core-configs-30.4-1.el7.noarch 27/52 Installing : patch-2.7.1-10.el7_5.x86_64 28/52 Installing : libmodman-2.0.1-8.el7.x86_64 29/52 Installing : libproxy-0.4.11-11.el7.x86_64 30/52 Installing : gdb-7.6.1-114.el7.x86_64 31/52 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/52 Installing : perl-srpm-macros-1-8.el7.noarch 33/52 Installing : pigz-2.3.4-1.el7.x86_64 34/52 Installing : golang-src-1.11.5-1.el7.noarch 35/52 Installing : kernel-headers-3.10.0-957.21.3.el7.x86_64 36/52 Installing : glibc-headers-2.17-260.el7_6.5.x86_64 37/52 Installing : glibc-devel-2.17-260.el7_6.5.x86_64 38/52 Installing : gcc-4.8.5-36.el7_6.2.x86_64 39/52 Installing : nettle-2.7.1-8.el7.x86_64 40/52 Installing : zip-3.0-11.el7.x86_64 41/52 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 42/52 Installing : mercurial-2.6.2-8.el7_4.x86_64 43/52 Installing : trousers-0.3.14-2.el7.x86_64 44/52 Installing : gnutls-3.3.29-9.el7_6.x86_64 45/52 Installing : neon-0.30.0-3.el7.x86_64 46/52 Installing : subversion-libs-1.7.14-14.el7.x86_64 47/52 Installing : subversion-1.7.14-14.el7.x86_64 48/52 Installing : golang-1.11.5-1.el7.x86_64 49/52 Installing : golang-bin-1.11.5-1.el7.x86_64 50/52 Installing : rpm-build-4.11.3-35.el7.x86_64 51/52 Installing : mock-1.4.16-1.el7.noarch 52/52 Verifying : trousers-0.3.14-2.el7.x86_64 1/52 Verifying : python36-idna-2.7-2.el7.noarch 2/52 Verifying : rpm-build-4.11.3-35.el7.x86_64 3/52 Verifying : python36-pysocks-1.6.8-6.el7.noarch 4/52 Verifying : mercurial-2.6.2-8.el7_4.x86_64 5/52 Verifying : zip-3.0-11.el7.x86_64 6/52 Verifying : python36-3.6.8-1.el7.x86_64 7/52 Verifying : subversion-libs-1.7.14-14.el7.x86_64 8/52 Verifying : python36-urllib3-1.19.1-5.el7.noarch 9/52 Verifying : nettle-2.7.1-8.el7.x86_64 
10/52 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 11/52 Verifying : kernel-headers-3.10.0-957.21.3.el7.x86_64 12/52 Verifying : golang-src-1.11.5-1.el7.noarch 13/52 Verifying : python36-pyroute2-0.4.13-2.el7.noarch 14/52 Verifying : pigz-2.3.4-1.el7.x86_64 15/52 Verifying : perl-srpm-macros-1-8.el7.noarch 16/52 Verifying : golang-1.11.5-1.el7.x86_64 17/52 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 18/52 Verifying : glibc-devel-2.17-260.el7_6.5.x86_64 19/52 Verifying : golang-bin-1.11.5-1.el7.x86_64 20/52 Verifying : gdb-7.6.1-114.el7.x86_64 21/52 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 22/52 Verifying : gnutls-3.3.29-9.el7_6.x86_64 23/52 Verifying : mock-1.4.16-1.el7.noarch 24/52 Verifying : libmodman-2.0.1-8.el7.x86_64 25/52 Verifying : python36-setuptools-39.2.0-3.el7.noarch 26/52 Verifying : mpfr-3.1.1-4.el7.x86_64 27/52 Verifying : python36-six-1.11.0-3.el7.noarch 28/52 Verifying : apr-util-1.5.2-6.el7.x86_64 29/52 Verifying : python36-chardet-2.3.0-6.el7.noarch 30/52 Verifying : patch-2.7.1-10.el7_5.x86_64 31/52 Verifying : distribution-gpg-keys-1.31-1.el7.noarch 32/52 Verifying : pakchois-0.4-10.el7.x86_64 33/52 Verifying : usermode-1.111-5.el7.x86_64 34/52 Verifying : apr-1.4.8-3.el7_4.1.x86_64 35/52 Verifying : libproxy-0.4.11-11.el7.x86_64 36/52 Verifying : mock-core-configs-30.4-1.el7.noarch 37/52 Verifying : neon-0.30.0-3.el7.x86_64 38/52 Verifying : bzip2-1.0.6-13.el7.x86_64 39/52 Verifying : subversion-1.7.14-14.el7.x86_64 40/52 Verifying : python36-distro-1.2.0-3.el7.noarch 41/52 Verifying : glibc-headers-2.17-260.el7_6.5.x86_64 42/52 Verifying : dwz-0.11-3.el7.x86_64 43/52 Verifying : unzip-6.0-19.el7.x86_64 44/52 Verifying : python36-markupsafe-0.23-3.el7.x86_64 45/52 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 46/52 Verifying : python36-requests-2.12.5-3.el7.noarch 47/52 Verifying : python36-jinja2-2.8.1-2.el7.noarch 48/52 Verifying : python36-libs-3.6.8-1.el7.x86_64 49/52 Verifying : elfutils-0.172-2.el7.x86_64 50/52 Verifying : 
python36-rpm-4.11.3-4.el7.x86_64 51/52 Verifying : libmpc-1.0.1-3.el7.x86_64 52/52 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.16-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.31-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5 glibc-headers.x86_64 0:2.17-260.el7_6.5 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.21.3.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.4-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python36.x86_64 0:3.6.8-1.el7 python36-chardet.noarch 0:2.3.0-6.el7 python36-distro.noarch 0:1.2.0-3.el7 python36-idna.noarch 0:2.7-2.el7 python36-jinja2.noarch 0:2.8.1-2.el7 python36-libs.x86_64 0:3.6.8-1.el7 python36-markupsafe.x86_64 0:0.23-3.el7 python36-pyroute2.noarch 0:0.4.13-2.el7 python36-pysocks.noarch 0:1.6.8-6.el7 python36-requests.noarch 0:2.12.5-3.el7 python36-rpm.x86_64 0:4.11.3-4.el7 python36-setuptools.noarch 0:39.2.0-3.el7 python36-six.noarch 0:1.11.0-3.el7 python36-urllib3.noarch 0:1.19.1-5.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   605    0   605    0     0   1953      0 --:--:-- --:--:-- --:--:--  1964
100 8513k  100 8513k    0     0  13.9M      0 --:--:-- --:--:-- --:--:-- 63.4M
Installing gometalinter.
Version: 2.0.5
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   627    0   627    0     0   1931      0 --:--:-- --:--:-- --:--:--  1935
100 38.3M  100 38.3M    0     0  45.4M      0 --:--:-- --:--:-- --:--:--  104M
Installing etcd.
Version: v3.3.9
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   153    0   153    0     0    569      0 --:--:-- --:--:-- --:--:--   570
  0     0    0   620    0     0   1752      0 --:--:-- --:--:-- --:--:--  1752
100 10.7M  100 10.7M    0     0  16.5M      0 --:--:-- --:--:-- --:--:-- 74.0M
~/nightlyrpmeYIOMs/go/src/github.com/gluster/glusterd2 ~
Installing vendored packages
Creating dist archive /root/nightlyrpmeYIOMs/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz
Created dist archive /root/nightlyrpmeYIOMs/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz
~ ~/nightlyrpmeYIOMs ~
INFO: mock.py version 1.4.16 starting (python version = 3.6.8)...
Start: init plugins
INFO: selinux disabled
Finish: init plugins
Start: run
INFO: Start(/root/nightlyrpmeYIOMs/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm)  Config(epel-7-x86_64)
Start: clean chroot
Finish: clean chroot
Start: chroot init
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled yum cache
Start: cleaning yum metadata
Finish: cleaning yum metadata
INFO: enabled HW Info plugin
Mock Version: 1.4.16
INFO: Mock Version: 1.4.16
Start: yum install
Finish: yum install
Start: creating root cache
Finish: creating root cache
Finish: chroot init
INFO: Installed packages:
Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: Outputting list of installed packages
Finish: Outputting list of installed packages
ERROR: Exception(/root/nightlyrpmeYIOMs/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 25 seconds
INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64
INFO: Cleaning up build root ('cleanup_on_failure=True')
Start: clean chroot
Finish: clean chroot
ERROR: Command failed:
 # /usr/bin/systemd-nspawn -q -M c0f6bb793b6c4275967db15a0e94a003 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.sg5y0uc2:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for :Building remotely : True
Logical operation result is TRUE
Running script  : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by

SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins6160251391459568125.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done 0cd557e5
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname   | ip_address  | chassis | used_count | current_state | comment  | distro | rel  | centos_version | architecture | node_pool | console_port | flavor |
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 151     | n24.crusty | 172.19.2.24 | crusty  | 3739       | Deployed      | 0cd557e5 | None   | None | 7              | x86_64       | 1         | 2230         | None   |
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0

From ci at centos.org  Sat Jun 29 00:41:04 2019
From: ci at centos.org (ci at centos.org)
Date: Sat, 29 Jun 2019 00:41:04 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #213
In-Reply-To: <99052285.2452.1561682230152.JavaMail.jenkins@jenkins.ci.centos.org>
References: <99052285.2452.1561682230152.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1170545913.2518.1561768864104.JavaMail.jenkins@jenkins.ci.centos.org>

See 

------------------------------------------
[...truncated 287.51 KB...]
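The post-build step above releases the leased Duffy nodes by looping over every SSID recorded in the workspace's `cico-ssid` file. A minimal self-contained sketch of that loop follows; here `cico` is stubbed out as a shell function (the real CLI needs CentOS CI credentials), and the SSID file path and sample SSID are demo values, not the CI's actual state:

```shell
#!/bin/sh
# Sketch of the cico-node-done-from-ansible.sh release loop.
# "cico" is a stand-in stub: it just records the SSID it was asked to release.
cico() {
    # real usage: cico -q node done <ssid>
    RELEASED="$RELEASED $4"
}

SSID_FILE=${SSID_FILE:-/tmp/cico-ssid-demo}    # hypothetical demo path
printf '0cd557e5\n' > "$SSID_FILE"             # sample SSID taken from the log above

RELEASED=""
# Read one SSID per line; skip blank lines so a trailing newline is harmless.
while IFS= read -r ssid; do
    [ -n "$ssid" ] && cico -q node done "$ssid"
done < "$SSID_FILE"

echo "released:$RELEASED"
```

Reading line-by-line with `read -r` (rather than the original's unquoted `$(cat ...)`) keeps each SSID intact even if the file ever contained unexpected whitespace.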
TASK [container-engine/docker : check number of search domains] **************** Saturday 29 June 2019 01:40:22 +0100 (0:00:00.294) 0:03:03.871 ********* TASK [container-engine/docker : check length of search domains] **************** Saturday 29 June 2019 01:40:22 +0100 (0:00:00.290) 0:03:04.161 ********* TASK [container-engine/docker : check for minimum kernel version] ************** Saturday 29 June 2019 01:40:22 +0100 (0:00:00.291) 0:03:04.453 ********* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Saturday 29 June 2019 01:40:23 +0100 (0:00:00.311) 0:03:04.764 ********* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] *** Saturday 29 June 2019 01:40:23 +0100 (0:00:00.645) 0:03:05.410 ********* TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Saturday 29 June 2019 01:40:25 +0100 (0:00:01.289) 0:03:06.699 ********* TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Saturday 29 June 2019 01:40:25 +0100 (0:00:00.252) 0:03:06.952 ********* TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Saturday 29 June 2019 01:40:25 +0100 (0:00:00.247) 0:03:07.200 ********* TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Saturday 29 June 2019 01:40:25 +0100 (0:00:00.303) 0:03:07.503 ********* TASK [container-engine/docker : Configure docker repository on Fedora] ********* Saturday 29 June 2019 01:40:26 +0100 (0:00:00.296) 0:03:07.799 ********* TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Saturday 29 June 2019 01:40:26 +0100 (0:00:00.282) 0:03:08.082 ********* TASK [container-engine/docker : Copy yum.conf for editing] ********************* Saturday 29 June 2019 01:40:26 +0100 (0:00:00.274) 0:03:08.357 ********* TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Saturday 29 June 2019 
01:40:27 +0100 (0:00:00.270) 0:03:08.627 ********* TASK [container-engine/docker : ensure docker packages are installed] ********** Saturday 29 June 2019 01:40:27 +0100 (0:00:00.281) 0:03:08.909 ********* TASK [container-engine/docker : Ensure docker packages are installed] ********** Saturday 29 June 2019 01:40:27 +0100 (0:00:00.346) 0:03:09.256 ********* TASK [container-engine/docker : get available packages on Ubuntu] ************** Saturday 29 June 2019 01:40:27 +0100 (0:00:00.325) 0:03:09.582 ********* TASK [container-engine/docker : show available packages on ubuntu] ************* Saturday 29 June 2019 01:40:28 +0100 (0:00:00.284) 0:03:09.867 ********* TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Saturday 29 June 2019 01:40:28 +0100 (0:00:00.284) 0:03:10.151 ********* TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Saturday 29 June 2019 01:40:28 +0100 (0:00:00.285) 0:03:10.437 ********* ok: [kube1] ok: [kube3] ok: [kube2] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Saturday 29 June 2019 01:40:30 +0100 (0:00:01.959) 0:03:12.396 ********* ok: [kube1] ok: [kube3] ok: [kube2] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Saturday 29 June 2019 01:40:31 +0100 (0:00:01.135) 0:03:13.532 ********* TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Saturday 29 June 2019 01:40:32 +0100 (0:00:00.281) 0:03:13.813 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Saturday 29 June 2019 01:40:33 +0100 (0:00:00.982) 0:03:14.796 ********* TASK [container-engine/docker : get systemd version] *************************** Saturday 29 June 2019 01:40:33 +0100 (0:00:00.295) 0:03:15.092 ********* TASK [container-engine/docker : Write docker.service systemd file] ************* Saturday 29 June 2019 01:40:33 +0100 (0:00:00.293) 0:03:15.386 ********* TASK [container-engine/docker : Write docker options systemd drop-in] ********** Saturday 29 June 2019 01:40:34 +0100 (0:00:00.356) 0:03:15.742 ********* changed: [kube3] changed: [kube2] changed: [kube1] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Saturday 29 June 2019 01:40:36 +0100 (0:00:01.990) 0:03:17.733 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Saturday 29 June 2019 01:40:38 +0100 (0:00:02.182) 0:03:19.916 ********* TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Saturday 29 June 2019 01:40:38 +0100 (0:00:00.424) 0:03:20.340 ********* RUNNING HANDLER [container-engine/docker : restart docker] ********************* Saturday 29 June 2019 01:40:39 +0100 (0:00:00.269) 0:03:20.610 ********* changed: [kube3] changed: [kube1] changed: [kube2] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Saturday 29 June 2019 01:40:40 +0100 (0:00:01.013) 0:03:21.624 ********* changed: [kube3] changed: [kube2] changed: [kube1] RUNNING HANDLER 
[container-engine/docker : Docker | reload docker.socket] ****** Saturday 29 June 2019 01:40:41 +0100 (0:00:01.050) 0:03:22.674 ********* RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Saturday 29 June 2019 01:40:41 +0100 (0:00:00.313) 0:03:22.988 ********* changed: [kube3] changed: [kube1] changed: [kube2] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Saturday 29 June 2019 01:40:45 +0100 (0:00:04.136) 0:03:27.124 ********* Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube3] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Saturday 29 June 2019 01:40:55 +0100 (0:00:10.180) 0:03:37.305 ********* changed: [kube3] changed: [kube1] changed: [kube2] TASK [container-engine/docker : ensure docker service is started and enabled] *** Saturday 29 June 2019 01:40:57 +0100 (0:00:01.326) 0:03:38.632 ********* ok: [kube1] => (item=docker) ok: [kube2] => (item=docker) ok: [kube3] => (item=docker) TASK [download : include_tasks] ************************************************ Saturday 29 June 2019 01:40:58 +0100 (0:00:01.168) 0:03:39.800 ********* included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Saturday 29 June 2019 01:40:58 +0100 (0:00:00.521) 0:03:40.321 ********* ok: [kube1] ok: [kube2] ok: [kube3] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Saturday 29 June 2019 01:40:59 +0100 (0:00:01.175) 0:03:41.497 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [download : container_download | create local directory for saved/loaded container images] *** Saturday 29 June 2019 01:41:00 +0100 (0:00:00.968) 0:03:42.465 ********* TASK [download : 
Download items] *********************************************** Saturday 29 June 2019 01:41:00 +0100 (0:00:00.123) 0:03:42.588 *********
fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
[...the same TaskInclude failure repeated verbatim for kube2 and kube3, and again for each remaining download task (failed=10 per host); identical output truncated...]
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3
PLAY RECAP *********************************************************************
kube1 : ok=108 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0
kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0
kube3 : ok=95 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0
Saturday 29 June 2019 01:41:03 +0100 (0:00:02.710) 0:03:45.299 *********
===============================================================================
Install packages ------------------------------------------------------- 35.03s
Wait for host to be available ------------------------------------------ 24.01s
gather facts from all instances ---------------------------------------- 17.87s
container-engine/docker : Docker | pause while Docker restarts --------- 10.18s
Persist loaded modules -------------------------------------------------- 5.76s
container-engine/docker : Docker | reload docker ------------------------ 4.14s
kubernetes/preinstall : Create kubernetes directories ------------------- 4.06s
download : Download items ----------------------------------------------- 2.71s
bootstrap-os : Gather nodes hostnames ----------------------------------- 2.69s
Load required kernel modules -------------------------------------------- 2.67s
kubernetes/preinstall : Create cni directories -------------------------- 2.55s
bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.45s
Extend root VG ---------------------------------------------------------- 2.31s
kubernetes/preinstall : Enable ip forwarding ---------------------------- 2.24s
container-engine/docker : Write docker dns systemd drop-in -------------- 2.18s
kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.14s
kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.12s
Gathering Facts --------------------------------------------------------- 2.12s
download : Sync container ----------------------------------------------- 2.07s
download : Download items ----------------------------------------------- 2.05s
==> kube3: An error occurred. The error will be shown after all tasks complete.
An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below.
An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again:
Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again.
Build step 'Execute shell' marked build as failure
Performing Post build task...
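The failure above is Ansible rejecting `delegate_to` on a dynamic task include (`TaskInclude`): newer Ansible releases no longer accept `delegate_to` directly on an `include_tasks`-style task. A minimal sketch of the failing shape and the usual workaround follows; the task name and variable are illustrative, not the actual kubespray source:

```yaml
# Rejected by newer Ansible:
# "'delegate_to' is not a valid attribute for a TaskInclude"
- name: download | fetch container images        # illustrative name
  include_tasks: download_container.yml
  delegate_to: "{{ download_delegate }}"         # delegation is not allowed on the include itself

# Workaround: drop delegation from the include and apply it to the
# concrete tasks inside download_container.yml instead
- name: download | fetch container images
  include_tasks: download_container.yml
```

In practice this usually means either pinning the kubespray checkout to a release that matches the installed Ansible version, or patching the download role so the delegation lives on the real tasks inside the included file.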
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Sat Jun 29 01:22:58 2019 From: ci at centos.org (ci at centos.org) Date: Sat, 29 Jun 2019 01:22:58 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #238 In-Reply-To: <1729997130.2456.1561684157089.JavaMail.jenkins@jenkins.ci.centos.org> References: <1729997130.2456.1561684157089.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <928701667.2523.1561771378646.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 56.45 KB...] changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. 
Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... [701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], 
u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── create └── prepare --> Scenario: 'default' --> Action: 'create' [DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg. PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] 
******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', 
u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
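Both molecule runs abort at the lint step because `meta/main.yml` still carries the `ansible-galaxy init` placeholder metadata, which trips ansible-lint rules [701] (missing platforms) and [703] (default author/description/company/license). A hedged sketch of a `galaxy_info` block that would satisfy both rules; every value here is illustrative, not taken from the repository:

```yaml
# roles/firewall_config/meta/main.yml -- illustrative values only
galaxy_info:
  author: Gluster Ansible maintainers        # [703] replace the 'your name' placeholder
  description: Firewall configuration for GlusterFS nodes
  company: Red Hat                           # [703] optional, but not the default text
  license: GPLv3                             # [703] a concrete license string
  min_ansible_version: 2.5
  platforms:                                 # [701] role info should contain platforms
    - name: EL
      versions:
        - 7
  galaxy_tags:
    - gluster
    - firewall
dependencies: []
```

With the placeholders replaced, the lint action would no longer short-circuit the test sequence before converge/verify.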
Could not match :Build started : False
Logical operation result is FALSE
Skipping script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

END OF POST BUILD TASK : 0

From ci at centos.org  Sun Jun 30 00:15:59 2019
From: ci at centos.org (ci at centos.org)
Date: Sun, 30 Jun 2019 00:15:59 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #410
In-Reply-To: <2058339214.2517.1561767367661.JavaMail.jenkins@jenkins.ci.centos.org>
References: <2058339214.2517.1561767367661.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <2046829707.2547.1561853759849.JavaMail.jenkins@jenkins.ci.centos.org>

See
------------------------------------------
[...truncated 38.64 KB...]
Transaction test succeeded
Running transaction
  Installing : python36-libs-3.6.8-1.el7.x86_64            1/52
  Installing : python36-3.6.8-1.el7.x86_64                 2/52
  Installing : apr-1.4.8-3.el7_4.1.x86_64                  3/52
  Installing : mpfr-3.1.1-4.el7.x86_64                     4/52
  Installing : libmpc-1.0.1-3.el7.x86_64                   5/52
  Installing : apr-util-1.5.2-6.el7.x86_64                 6/52
  Installing : python36-six-1.11.0-3.el7.noarch            7/52
  Installing : cpp-4.8.5-36.el7_6.2.x86_64                 8/52
  Installing : python36-idna-2.7-2.el7.noarch              9/52
  Installing : python36-pysocks-1.6.8-6.el7.noarch        10/52
  Installing : python36-urllib3-1.19.1-5.el7.noarch       11/52
  Installing : python36-pyroute2-0.4.13-2.el7.noarch      12/52
  Installing : python36-setuptools-39.2.0-3.el7.noarch    13/52
  Installing : python36-chardet-2.3.0-6.el7.noarch        14/52
  Installing : python36-requests-2.12.5-3.el7.noarch      15/52
  Installing : python36-distro-1.2.0-3.el7.noarch         16/52
  Installing : python36-markupsafe-0.23-3.el7.x86_64      17/52
  Installing : python36-jinja2-2.8.1-2.el7.noarch         18/52
  Installing : python36-rpm-4.11.3-4.el7.x86_64           19/52
  Installing : elfutils-0.172-2.el7.x86_64                20/52
  Installing : unzip-6.0-19.el7.x86_64                    21/52
  Installing : dwz-0.11-3.el7.x86_64                      22/52
  Installing : bzip2-1.0.6-13.el7.x86_64                  23/52
  Installing : usermode-1.111-5.el7.x86_64                24/52
  Installing : pakchois-0.4-10.el7.x86_64                 25/52
  Installing : distribution-gpg-keys-1.31-1.el7.noarch    26/52
  Installing : mock-core-configs-30.4-1.el7.noarch        27/52
  Installing : patch-2.7.1-10.el7_5.x86_64                28/52
  Installing : libmodman-2.0.1-8.el7.x86_64               29/52
  Installing : libproxy-0.4.11-11.el7.x86_64              30/52
  Installing : gdb-7.6.1-114.el7.x86_64                   31/52
  Installing : perl-Thread-Queue-3.02-2.el7.noarch        32/52
  Installing : perl-srpm-macros-1-8.el7.noarch            33/52
  Installing : pigz-2.3.4-1.el7.x86_64                    34/52
  Installing : golang-src-1.11.5-1.el7.noarch             35/52
  Installing : kernel-headers-3.10.0-957.21.3.el7.x86_64  36/52
  Installing : glibc-headers-2.17-260.el7_6.5.x86_64      37/52
  Installing : glibc-devel-2.17-260.el7_6.5.x86_64        38/52
  Installing : gcc-4.8.5-36.el7_6.2.x86_64                39/52
  Installing : nettle-2.7.1-8.el7.x86_64                  40/52
  Installing : zip-3.0-11.el7.x86_64                      41/52
  Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 42/52
  Installing : mercurial-2.6.2-8.el7_4.x86_64             43/52
  Installing : trousers-0.3.14-2.el7.x86_64               44/52
  Installing : gnutls-3.3.29-9.el7_6.x86_64               45/52
  Installing : neon-0.30.0-3.el7.x86_64                   46/52
  Installing : subversion-libs-1.7.14-14.el7.x86_64       47/52
  Installing : subversion-1.7.14-14.el7.x86_64            48/52
  Installing : golang-1.11.5-1.el7.x86_64                 49/52
  Installing : golang-bin-1.11.5-1.el7.x86_64             50/52
  Installing : rpm-build-4.11.3-35.el7.x86_64             51/52
  Installing : mock-1.4.16-1.el7.noarch                   52/52
  Verifying  : trousers-0.3.14-2.el7.x86_64                1/52
  Verifying  : python36-idna-2.7-2.el7.noarch              2/52
  Verifying  : rpm-build-4.11.3-35.el7.x86_64              3/52
  Verifying  : python36-pysocks-1.6.8-6.el7.noarch         4/52
  Verifying  : mercurial-2.6.2-8.el7_4.x86_64              5/52
  Verifying  : zip-3.0-11.el7.x86_64                       6/52
  Verifying  : python36-3.6.8-1.el7.x86_64                 7/52
  Verifying  : subversion-libs-1.7.14-14.el7.x86_64        8/52
  Verifying  : python36-urllib3-1.19.1-5.el7.noarch        9/52
  Verifying  : nettle-2.7.1-8.el7.x86_64                  10/52
  Verifying  : gcc-4.8.5-36.el7_6.2.x86_64                11/52
  Verifying  : kernel-headers-3.10.0-957.21.3.el7.x86_64  12/52
  Verifying  : golang-src-1.11.5-1.el7.noarch             13/52
  Verifying  : python36-pyroute2-0.4.13-2.el7.noarch      14/52
  Verifying  : pigz-2.3.4-1.el7.x86_64                    15/52
  Verifying  : perl-srpm-macros-1-8.el7.noarch            16/52
  Verifying  : golang-1.11.5-1.el7.x86_64                 17/52
  Verifying  : perl-Thread-Queue-3.02-2.el7.noarch        18/52
  Verifying  : glibc-devel-2.17-260.el7_6.5.x86_64        19/52
  Verifying  : golang-bin-1.11.5-1.el7.x86_64             20/52
  Verifying  : gdb-7.6.1-114.el7.x86_64                   21/52
  Verifying  : redhat-rpm-config-9.1.0-87.el7.centos.noarch 22/52
  Verifying  : gnutls-3.3.29-9.el7_6.x86_64               23/52
  Verifying  : mock-1.4.16-1.el7.noarch                   24/52
  Verifying  : libmodman-2.0.1-8.el7.x86_64               25/52
  Verifying  : python36-setuptools-39.2.0-3.el7.noarch    26/52
  Verifying  : mpfr-3.1.1-4.el7.x86_64                    27/52
  Verifying  : python36-six-1.11.0-3.el7.noarch           28/52
  Verifying  : apr-util-1.5.2-6.el7.x86_64                29/52
  Verifying  : python36-chardet-2.3.0-6.el7.noarch        30/52
  Verifying  : patch-2.7.1-10.el7_5.x86_64                31/52
  Verifying  : distribution-gpg-keys-1.31-1.el7.noarch    32/52
  Verifying  : pakchois-0.4-10.el7.x86_64                 33/52
  Verifying  : usermode-1.111-5.el7.x86_64                34/52
  Verifying  : apr-1.4.8-3.el7_4.1.x86_64                 35/52
  Verifying  : libproxy-0.4.11-11.el7.x86_64              36/52
  Verifying  : mock-core-configs-30.4-1.el7.noarch        37/52
  Verifying  : neon-0.30.0-3.el7.x86_64                   38/52
  Verifying  : bzip2-1.0.6-13.el7.x86_64                  39/52
  Verifying  : subversion-1.7.14-14.el7.x86_64            40/52
  Verifying  : python36-distro-1.2.0-3.el7.noarch         41/52
  Verifying  : glibc-headers-2.17-260.el7_6.5.x86_64      42/52
  Verifying  : dwz-0.11-3.el7.x86_64                      43/52
  Verifying  : unzip-6.0-19.el7.x86_64                    44/52
  Verifying  : python36-markupsafe-0.23-3.el7.x86_64      45/52
  Verifying  : cpp-4.8.5-36.el7_6.2.x86_64                46/52
  Verifying  : python36-requests-2.12.5-3.el7.noarch      47/52
  Verifying  : python36-jinja2-2.8.1-2.el7.noarch         48/52
  Verifying  : python36-libs-3.6.8-1.el7.x86_64           49/52
  Verifying  : elfutils-0.172-2.el7.x86_64                50/52
  Verifying  : python36-rpm-4.11.3-4.el7.x86_64           51/52
  Verifying  : libmpc-1.0.1-3.el7.x86_64                  52/52

Installed:
  golang.x86_64 0:1.11.5-1.el7
  mock.noarch 0:1.4.16-1.el7
  rpm-build.x86_64 0:4.11.3-35.el7

Dependency Installed:
  apr.x86_64 0:1.4.8-3.el7_4.1
  apr-util.x86_64 0:1.5.2-6.el7
  bzip2.x86_64 0:1.0.6-13.el7
  cpp.x86_64 0:4.8.5-36.el7_6.2
  distribution-gpg-keys.noarch 0:1.31-1.el7
  dwz.x86_64 0:0.11-3.el7
  elfutils.x86_64 0:0.172-2.el7
  gcc.x86_64 0:4.8.5-36.el7_6.2
  gdb.x86_64 0:7.6.1-114.el7
  glibc-devel.x86_64 0:2.17-260.el7_6.5
  glibc-headers.x86_64 0:2.17-260.el7_6.5
  gnutls.x86_64 0:3.3.29-9.el7_6
  golang-bin.x86_64 0:1.11.5-1.el7
  golang-src.noarch 0:1.11.5-1.el7
  kernel-headers.x86_64 0:3.10.0-957.21.3.el7
  libmodman.x86_64 0:2.0.1-8.el7
  libmpc.x86_64 0:1.0.1-3.el7
  libproxy.x86_64 0:0.4.11-11.el7
  mercurial.x86_64 0:2.6.2-8.el7_4
  mock-core-configs.noarch 0:30.4-1.el7
  mpfr.x86_64 0:3.1.1-4.el7
  neon.x86_64 0:0.30.0-3.el7
  nettle.x86_64 0:2.7.1-8.el7
  pakchois.x86_64 0:0.4-10.el7
  patch.x86_64 0:2.7.1-10.el7_5
  perl-Thread-Queue.noarch 0:3.02-2.el7
  perl-srpm-macros.noarch 0:1-8.el7
  pigz.x86_64 0:2.3.4-1.el7
  python36.x86_64 0:3.6.8-1.el7
  python36-chardet.noarch 0:2.3.0-6.el7
  python36-distro.noarch 0:1.2.0-3.el7
  python36-idna.noarch 0:2.7-2.el7
  python36-jinja2.noarch 0:2.8.1-2.el7
  python36-libs.x86_64 0:3.6.8-1.el7
  python36-markupsafe.x86_64 0:0.23-3.el7
  python36-pyroute2.noarch 0:0.4.13-2.el7
  python36-pysocks.noarch 0:1.6.8-6.el7
  python36-requests.noarch 0:2.12.5-3.el7
  python36-rpm.x86_64 0:4.11.3-4.el7
  python36-setuptools.noarch 0:39.2.0-3.el7
  python36-six.noarch 0:1.11.0-3.el7
  python36-urllib3.noarch 0:1.19.1-5.el7
  redhat-rpm-config.noarch 0:9.1.0-87.el7.centos
  subversion.x86_64 0:1.7.14-14.el7
  subversion-libs.x86_64 0:1.7.14-14.el7
  trousers.x86_64 0:0.3.14-2.el7
  unzip.x86_64 0:6.0-19.el7
  usermode.x86_64 0:1.111-5.el7
  zip.x86_64 0:3.0-11.el7

Complete!
LINUX
Installing dep.
Version: v0.5.0
[...curl download progress meter truncated...]
Installing gometalinter.
Version: 2.0.5
[...curl download progress meter truncated...]
Installing etcd.
Version: v3.3.9
[...curl download progress meter truncated...]
~/nightlyrpmNGlSar/go/src/github.com/gluster/glusterd2 ~
Installing vendored packages
Creating dist archive /root/nightlyrpmNGlSar/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz
Created dist archive /root/nightlyrpmNGlSar/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz
~ ~/nightlyrpmNGlSar ~
INFO: mock.py version 1.4.16 starting (python version = 3.6.8)...
Start: init plugins
INFO: selinux disabled
Finish: init plugins
Start: run
INFO: Start(/root/nightlyrpmNGlSar/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64)
Start: clean chroot
Finish: clean chroot
Start: chroot init
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled yum cache
Start: cleaning yum metadata
Finish: cleaning yum metadata
INFO: enabled HW Info plugin
Mock Version: 1.4.16
INFO: Mock Version: 1.4.16
Start: yum install
Finish: yum install
Start: creating root cache
Finish: creating root cache
Finish: chroot init
INFO: Installed packages:
Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: Outputting list of installed packages
Finish: Outputting list of installed packages
ERROR: Exception(/root/nightlyrpmNGlSar/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 24 seconds
INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64
INFO: Cleaning up build root ('cleanup_on_failure=True')
Start: clean chroot
Finish: clean chroot
ERROR: Command failed:
 # /usr/bin/systemd-nspawn -q -M 5327044c23bf479eb90fac9edc4b1227 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.opmc1eyj:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=PS1= \s-\v\$ --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for :Building remotely : True
Logical operation result is TRUE
Running script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins3588390566353353149.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done 9737838b
+---------+-----------+------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname  | ip_address | chassis | used_count | current_state | comment  | distro | rel  | centos_version | architecture | node_pool | console_port | flavor |
+---------+-----------+------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 135     | n8.crusty | 172.19.2.8 | crusty  | 3743       | Deployed      | 9737838b | None   | None | 7              | x86_64       | 1         | 2070         | None   |
+---------+-----------+------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0

From ci at centos.org  Sun Jun 30 00:37:19 2019
From: ci at centos.org (ci at centos.org)
Date: Sun, 30 Jun 2019 00:37:19 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #214
In-Reply-To: <1170545913.2518.1561768864104.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1170545913.2518.1561768864104.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <758204498.2548.1561855039961.JavaMail.jenkins@jenkins.ci.centos.org>

See
------------------------------------------
[...truncated 287.46 KB...]
TASK [container-engine/docker : check number of search domains] ****************
Sunday 30 June 2019 01:36:53 +0100 (0:00:00.131) 0:02:01.436 ***********

TASK [container-engine/docker : check length of search domains] ****************
Sunday 30 June 2019 01:36:53 +0100 (0:00:00.133) 0:02:01.570 ***********

TASK [container-engine/docker : check for minimum kernel version] **************
Sunday 30 June 2019 01:36:53 +0100 (0:00:00.128) 0:02:01.699 ***********

TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] ***
Sunday 30 June 2019 01:36:54 +0100 (0:00:00.122) 0:02:01.821 ***********

TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] ***
Sunday 30 June 2019 01:36:54 +0100 (0:00:00.245) 0:02:02.066 ***********

TASK [container-engine/docker : ensure docker-ce repository public key is installed] ***
Sunday 30 June 2019 01:36:54 +0100 (0:00:00.627) 0:02:02.694 ***********

TASK [container-engine/docker : ensure docker-ce repository is enabled] ********
Sunday 30 June 2019 01:36:55 +0100 (0:00:00.115) 0:02:02.809 ***********

TASK [container-engine/docker : ensure docker-engine repository public key is installed] ***
Sunday 30 June 2019 01:36:55 +0100 (0:00:00.115) 0:02:02.925 ***********

TASK [container-engine/docker : ensure docker-engine repository is enabled] ****
Sunday 30 June 2019 01:36:55 +0100 (0:00:00.142) 0:02:03.067 ***********

TASK [container-engine/docker : Configure docker repository on Fedora] *********
Sunday 30 June 2019 01:36:55 +0100 (0:00:00.140) 0:02:03.207 ***********

TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] ***
Sunday 30 June 2019 01:36:55 +0100 (0:00:00.124) 0:02:03.332 ***********

TASK [container-engine/docker : Copy yum.conf for editing] *********************
Sunday 30 June 2019 01:36:55 +0100 (0:00:00.122) 0:02:03.454 ***********

TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ******
Sunday 30 June 2019 01:36:55 +0100 (0:00:00.128) 0:02:03.583 ***********

TASK [container-engine/docker : ensure docker packages are installed] **********
Sunday 30 June 2019 01:36:55 +0100 (0:00:00.130) 0:02:03.714 ***********

TASK [container-engine/docker : Ensure docker packages are installed] **********
Sunday 30 June 2019 01:36:56 +0100 (0:00:00.163) 0:02:03.878 ***********

TASK [container-engine/docker : get available packages on Ubuntu] **************
Sunday 30 June 2019 01:36:56 +0100 (0:00:00.151) 0:02:04.029 ***********

TASK [container-engine/docker : show available packages on ubuntu] *************
Sunday 30 June 2019 01:36:56 +0100 (0:00:00.129) 0:02:04.159 ***********

TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] ***
Sunday 30 June 2019 01:36:56 +0100 (0:00:00.132) 0:02:04.291 ***********

TASK [container-engine/docker : ensure service is started if docker packages are already present] ***
Sunday 30 June 2019 01:36:56 +0100 (0:00:00.130) 0:02:04.422 ***********
ok: [kube3]
ok: [kube2]
ok: [kube1]
[WARNING]: flush_handlers task does not support when conditional

TASK [container-engine/docker : set fact for docker_version] *******************
Sunday 30 June 2019 01:36:57 +0100 (0:00:00.882) 0:02:05.305 ***********
ok: [kube1]
ok: [kube2]
ok: [kube3]

TASK [container-engine/docker : check minimum docker version for docker_dns mode. You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] ***
Sunday 30 June 2019 01:36:58 +0100 (0:00:00.543) 0:02:05.848 ***********

TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] ***
Sunday 30 June 2019 01:36:58 +0100 (0:00:00.128) 0:02:05.977 ***********
changed: [kube1]
changed: [kube2]
changed: [kube3]

TASK [container-engine/docker : Write docker proxy drop-in] ********************
Sunday 30 June 2019 01:36:58 +0100 (0:00:00.556) 0:02:06.534 ***********

TASK [container-engine/docker : get systemd version] ***************************
Sunday 30 June 2019 01:36:58 +0100 (0:00:00.146) 0:02:06.681 ***********

TASK [container-engine/docker : Write docker.service systemd file] *************
Sunday 30 June 2019 01:36:59 +0100 (0:00:00.133) 0:02:06.815 ***********

TASK [container-engine/docker : Write docker options systemd drop-in] **********
Sunday 30 June 2019 01:36:59 +0100 (0:00:00.146) 0:02:06.961 ***********
changed: [kube1]
changed: [kube2]
changed: [kube3]

TASK [container-engine/docker : Write docker dns systemd drop-in] **************
Sunday 30 June 2019 01:37:00 +0100 (0:00:01.003) 0:02:07.965 ***********
changed: [kube1]
changed: [kube2]
changed: [kube3]

TASK [container-engine/docker : Copy docker orphan clean up script to the node] ***
Sunday 30 June 2019 01:37:01 +0100 (0:00:00.935) 0:02:08.900 ***********

TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] ***
Sunday 30 June 2019 01:37:01 +0100 (0:00:00.144) 0:02:09.044 ***********

RUNNING HANDLER [container-engine/docker : restart docker] *********************
Sunday 30 June 2019 01:37:01 +0100 (0:00:00.117) 0:02:09.161 ***********
changed: [kube1]
changed: [kube2]
changed: [kube3]

RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************
Sunday 30 June 2019 01:37:01 +0100 (0:00:00.463) 0:02:09.624 ***********
changed: [kube1]
changed: [kube2]
changed: [kube3]

RUNNING HANDLER [container-engine/docker : Docker | reload docker.socket] ******
Sunday 30 June 2019 01:37:02 +0100 (0:00:00.526) 0:02:10.151 ***********

RUNNING HANDLER [container-engine/docker : Docker | reload docker] *************
Sunday 30 June 2019 01:37:02 +0100 (0:00:00.124) 0:02:10.275 ***********
changed: [kube1]
changed: [kube2]
changed: [kube3]

RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] ***
Sunday 30 June 2019 01:37:05 +0100 (0:00:03.084) 0:02:13.360 ***********
Pausing for 10 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
[container-engine/docker : Docker | pause while Docker restarts]
Waiting for docker restart:
ok: [kube1]

RUNNING HANDLER [container-engine/docker : Docker | wait for docker] ***********
Sunday 30 June 2019 01:37:15 +0100 (0:00:10.103) 0:02:23.464 ***********
changed: [kube1]
changed: [kube2]
changed: [kube3]

TASK [container-engine/docker : ensure docker service is started and enabled] ***
Sunday 30 June 2019 01:37:16 +0100 (0:00:00.578) 0:02:24.042 ***********
ok: [kube2] => (item=docker)
ok: [kube1] => (item=docker)
ok: [kube3] => (item=docker)

TASK [download : include_tasks] ************************************************
Sunday 30 June 2019 01:37:16 +0100 (0:00:00.641) 0:02:24.684 ***********
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3

TASK [download : Register docker images info] **********************************
Sunday 30 June 2019 01:37:17 +0100 (0:00:00.215) 0:02:24.899 ***********
ok: [kube3]
ok: [kube1]
ok: [kube2]

TASK [download : container_download | Create dest directory for saved/loaded container images] ***
Sunday 30 June 2019 01:37:17 +0100 (0:00:00.609) 0:02:25.509 ***********
changed: [kube1]
changed: [kube2]
changed: [kube3]

TASK [download : container_download | create local directory for saved/loaded container images] ***
Sunday 30 June 2019 01:37:18 +0100 (0:00:00.457) 0:02:25.966 ***********

TASK [download : 
Download items] ***********************************************
Sunday 30 June 2019 01:37:18 +0100 (0:00:00.067) 0:02:26.033 ***********
fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
fatal: [kube3]: FAILED! => (same 'delegate_to' error)
fatal: [kube2]: FAILED! => (same 'delegate_to' error)
[...the identical 'delegate_to' failure repeated for kube1, kube3 and kube2 on each subsequent download item; duplicate messages truncated...]
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube3, kube2
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube3, kube2
[...further identical 'delegate_to' failures truncated...]
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube3, kube2
[...further identical 'delegate_to' failures truncated...]

PLAY RECAP *********************************************************************
kube1 : ok=109 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0
kube2 : ok=96  changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0
kube3 : ok=94  changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0

Sunday 30 June 2019 01:37:19 +0100 (0:00:01.340) 0:02:27.374 ***********
===============================================================================
Install packages ------------------------------------------------------- 26.47s
Wait for host to be available ------------------------------------------ 16.18s
Extend root VG --------------------------------------------------------- 15.84s
gather facts from all instances ---------------------------------------- 10.59s
container-engine/docker : Docker | pause while Docker restarts --------- 10.10s
Persist loaded modules -------------------------------------------------- 3.48s
container-engine/docker : Docker | reload docker ------------------------ 3.08s
kubernetes/preinstall : Create kubernetes directories ------------------- 1.94s
Extend the root LV and FS to occupy remaining space --------------------- 1.72s
bootstrap-os : Gather nodes hostnames ----------------------------------- 1.56s
bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 1.46s
Load required kernel modules -------------------------------------------- 1.45s
download : Download items ----------------------------------------------- 1.34s
kubernetes/preinstall : Create cni directories -------------------------- 1.32s
Gathering Facts --------------------------------------------------------- 1.25s
bootstrap-os : Disable fastestmirror plugin ----------------------------- 1.23s
download : Download items ----------------------------------------------- 1.21s
bootstrap-os : Create remote_tmp for it is used by another module ------- 1.16s
kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 1.02s
container-engine/docker : Write docker options systemd drop-in ---------- 1.00s
==> kube3: An error occurred. The error will be shown after all tasks complete.
An error occurred while executing multiple actions in parallel.
Any errors that occurred are shown below.
An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again:
Ansible failed to complete successfully. Any error output should be visible above.
Please fix these errors and try again.
Build step 'Execute shell' marked build as failure
Performing Post build task...
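[Editor's note] The failure above is triggered by attaching `delegate_to` directly to a task include, which newer Ansible releases reject with exactly the "'delegate_to' is not a valid attribute for a TaskInclude" message shown. As a hypothetical minimal sketch (the file name and variable below are placeholders, not the actual kubespray content), the rejected pattern and one accepted rework via the `apply` keyword look like:

```yaml
# Rejected: delegate_to is not allowed on the include itself.
- name: container_download | Make download decision if pull is required by tag or sha256
  include_tasks: set_docker_image_facts.yml    # placeholder file name
  delegate_to: "{{ download_delegate }}"       # raises the TaskInclude error above

# Accepted: push the keyword onto the included tasks with apply.
- name: container_download | Make download decision if pull is required by tag or sha256
  include_tasks:
    file: set_docker_image_facts.yml
  args:
    apply:
      delegate_to: "{{ download_delegate }}"
```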
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Sun Jun 30 01:11:12 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 30 Jun 2019 01:11:12 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #239 In-Reply-To: <928701667.2523.1561771378646.JavaMail.jenkins@jenkins.ci.centos.org> References: <928701667.2523.1561771378646.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <2045843343.2552.1561857072176.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 56.51 KB...] changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. 
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: author
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: description
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: company
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: license
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
An error occurred during the test sequence action: 'lint'. Cleaning up.
--> Scenario: 'default'
--> Action: 'destroy'

PLAY [Destroy] *****************************************************************

TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Delete docker network(s)] ************************************************

PLAY RECAP *********************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── create
    └── prepare

--> Scenario: 'default'
--> Action: 'create'
[DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.

PLAY [Create] ******************************************************************

TASK [Log into a Docker registry] **********************************************
skipping: [localhost] => (item=None)

TASK [Create Dockerfiles from image names] *************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Discover local Docker images] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Build an Ansible compatible image] ***************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Create docker network(s)] ************************************************

TASK [Determine the CMD directives] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Create molecule instance(s)] *********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) creation to complete] *******************************
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

PLAY RECAP *********************************************************************
localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
--> Scenario: 'default'
--> Action: 'prepare'

PLAY [Prepare] *****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

PLAY RECAP *********************************************************************
instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: author
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: description
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: company
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: license
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
An error occurred during the test sequence action: 'lint'. Cleaning up.
--> Scenario: 'default'
--> Action: 'destroy'

PLAY [Destroy] *****************************************************************

TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Delete docker network(s)] ************************************************

PLAY RECAP *********************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
Build step 'Execute shell' marked build as failure
Performing Post build task... 
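[Editor's note] The [701]/[703] findings above are ansible-lint rejecting the untouched `ansible-galaxy init` skeleton in the role's `meta/main.yml`. A minimal sketch of metadata that would satisfy those rules; the concrete values below are placeholders for illustration, not the project's real metadata:

```yaml
# roles/firewall_config/meta/main.yml -- placeholder values, assumptions only
galaxy_info:
  author: Gluster Ansible maintainers            # [703]: replace 'your name'
  description: Configure firewalld for GlusterFS # [703]: replace 'your description'
  company: Red Hat                               # [703]: replace 'your company (optional)'
  license: GPLv3                                 # [703]: replace 'license (GPLv2, CC-BY, etc)'
  min_ansible_version: 2.5
  platforms:                                     # [701]: role info should contain platforms
    - name: EL
      versions:
        - 7
  galaxy_tags: []
dependencies: []
```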
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Sat Jun 8 00:38:50 2019 From: ci at centos.org (ci at centos.org) Date: Sat, 08 Jun 2019 00:38:50 -0000 Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #192 In-Reply-To: <940507328.247.1559867830586.JavaMail.jenkins@jenkins.ci.centos.org> References: <940507328.247.1559867830586.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <862453831.296.1559954329106.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 243.13 KB...] [WARNING]: Unhandled error in Python interpreter discovery for host kube2: Failed to connect to the host via ssh: ssh: Could not resolve hostname kube2: Name or service not known fatal: [kube2]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"kube2\". Make sure this host can be reached over ssh: ssh: Could not resolve hostname kube2: Name or service not known\r\n", "unreachable": true} [WARNING]: The value 4 (type int) in a string field was converted to u'4' (type string). If this does not look like what you expect, quote the entire value to ensure it does not change. 
changed: [kube1] changed: [kube3] TASK [Extend the root LV and FS to occupy remaining space] ********************* Saturday 08 June 2019 01:37:16 +0100 (0:00:02.373) 0:00:02.589 ********* changed: [kube1] changed: [kube3] TASK [Load required kernel modules] ******************************************** Saturday 08 June 2019 01:37:18 +0100 (0:00:01.795) 0:00:04.385 ********* ok: [kube1] => (item=dm_mirror) ok: [kube3] => (item=dm_mirror) changed: [kube3] => (item=dm_snapshot) changed: [kube1] => (item=dm_snapshot) changed: [kube3] => (item=dm_thin_pool) changed: [kube1] => (item=dm_thin_pool) TASK [Persist loaded modules] ************************************************** Saturday 08 June 2019 01:37:21 +0100 (0:00:02.589) 0:00:06.975 ********* changed: [kube3] => (item=dm_mirror) changed: [kube1] => (item=dm_mirror) changed: [kube3] => (item=dm_snapshot) changed: [kube1] => (item=dm_snapshot) changed: [kube3] => (item=dm_thin_pool) changed: [kube1] => (item=dm_thin_pool) TASK [Install packages] ******************************************************** Saturday 08 June 2019 01:37:27 +0100 (0:00:05.831) 0:00:12.806 ********* changed: [kube3] => (item=socat) changed: [kube1] => (item=socat) TASK [Reboot to make layered packages available] ******************************* Saturday 08 June 2019 01:37:59 +0100 (0:00:32.629) 0:00:45.436 ********* changed: [kube3] changed: [kube1] TASK [Wait for host to be available] ******************************************* Saturday 08 June 2019 01:38:01 +0100 (0:00:01.756) 0:00:47.193 ********* ok: [kube1] ok: [kube3] PLAY [localhost] *************************************************************** skipping: no hosts matched [WARNING]: Could not match supplied host pattern, ignoring: bastion PLAY [bastion[0]] ************************************************************** skipping: no hosts matched [WARNING]: Could not match supplied host pattern, ignoring: calico-rr PLAY [k8s-cluster:etcd:calico-rr] 
********************************************** TASK [download : include_tasks] ************************************************ Saturday 08 June 2019 01:38:22 +0100 (0:00:21.282) 0:01:08.475 ********* TASK [download : Download items] *********************************************** Saturday 08 June 2019 01:38:22 +0100 (0:00:00.081) 0:01:08.557 ********* TASK [download : Sync container] *********************************************** Saturday 08 June 2019 01:38:23 +0100 (0:00:00.313) 0:01:08.870 ********* TASK [download : include_tasks] ************************************************ Saturday 08 June 2019 01:38:23 +0100 (0:00:00.417) 0:01:09.288 ********* TASK [kubespray-defaults : Configure defaults] ********************************* Saturday 08 June 2019 01:38:23 +0100 (0:00:00.111) 0:01:09.399 ********* ok: [kube1] => { "msg": "Check roles/kubespray-defaults/defaults/main.yml" } ok: [kube3] => { "msg": "Check roles/kubespray-defaults/defaults/main.yml" } TASK [bootstrap-os : Fetch /etc/os-release] ************************************ Saturday 08 June 2019 01:38:23 +0100 (0:00:00.097) 0:01:09.496 ********* ok: [kube1] ok: [kube3] TASK [bootstrap-os : include_tasks] ******************************************** Saturday 08 June 2019 01:38:24 +0100 (0:00:00.305) 0:01:09.802 ********* TASK [bootstrap-os : include_tasks] ******************************************** Saturday 08 June 2019 01:38:24 +0100 (0:00:00.099) 0:01:09.901 ********* TASK [bootstrap-os : include_tasks] ******************************************** Saturday 08 June 2019 01:38:24 +0100 (0:00:00.083) 0:01:09.984 ********* TASK [bootstrap-os : include_tasks] ******************************************** Saturday 08 June 2019 01:38:24 +0100 (0:00:00.069) 0:01:10.054 ********* TASK [bootstrap-os : include_tasks] ******************************************** Saturday 08 June 2019 01:38:24 +0100 (0:00:00.071) 0:01:10.125 ********* included: 
/root/gcs/deploy/kubespray/roles/bootstrap-os/tasks/bootstrap-centos.yml for kube1, kube3 TASK [bootstrap-os : check if atomic host] ************************************* Saturday 08 June 2019 01:38:24 +0100 (0:00:00.168) 0:01:10.294 ********* ok: [kube3] ok: [kube1] TASK [bootstrap-os : set_fact] ************************************************* Saturday 08 June 2019 01:38:25 +0100 (0:00:01.160) 0:01:11.455 ********* ok: [kube1] ok: [kube3] TASK [bootstrap-os : Check presence of fastestmirror.conf] ********************* Saturday 08 June 2019 01:38:25 +0100 (0:00:00.105) 0:01:11.560 ********* ok: [kube1] ok: [kube3] TASK [bootstrap-os : Disable fastestmirror plugin] ***************************** Saturday 08 June 2019 01:38:26 +0100 (0:00:00.992) 0:01:12.553 ********* changed: [kube1] changed: [kube3] TASK [bootstrap-os : Add proxy to /etc/yum.conf if http_proxy is defined] ****** Saturday 08 June 2019 01:38:28 +0100 (0:00:01.812) 0:01:14.365 ********* TASK [bootstrap-os : Install libselinux-python and yum-utils for bootstrap] **** Saturday 08 June 2019 01:38:28 +0100 (0:00:00.078) 0:01:14.443 ********* TASK [bootstrap-os : Check python-pip package] ********************************* Saturday 08 June 2019 01:38:28 +0100 (0:00:00.076) 0:01:14.520 ********* TASK [bootstrap-os : Install epel-release for bootstrap] *********************** Saturday 08 June 2019 01:38:28 +0100 (0:00:00.077) 0:01:14.597 ********* TASK [bootstrap-os : Install pip for bootstrap] ******************************** Saturday 08 June 2019 01:38:28 +0100 (0:00:00.061) 0:01:14.659 ********* TASK [bootstrap-os : include_tasks] ******************************************** Saturday 08 June 2019 01:38:28 +0100 (0:00:00.073) 0:01:14.732 ********* TASK [bootstrap-os : include_tasks] ******************************************** Saturday 08 June 2019 01:38:29 +0100 (0:00:00.079) 0:01:14.811 ********* TASK [bootstrap-os : Remove require tty] *************************************** Saturday 08 June 2019 
01:38:29 +0100 (0:00:00.070) 0:01:14.882 ********* ok: [kube1] ok: [kube3] TASK [bootstrap-os : Create remote_tmp for it is used by another module] ******* Saturday 08 June 2019 01:38:30 +0100 (0:00:01.398) 0:01:16.280 ********* changed: [kube1] changed: [kube3] TASK [bootstrap-os : Gather nodes hostnames] *********************************** Saturday 08 June 2019 01:38:32 +0100 (0:00:01.992) 0:01:18.273 ********* ok: [kube1] ok: [kube3] TASK [bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed)] *** Saturday 08 June 2019 01:38:34 +0100 (0:00:02.501) 0:01:20.775 ********* ok: [kube1] ok: [kube3] TASK [bootstrap-os : Assign inventory name to unconfigured hostnames (CoreOS and Tumbleweed only)] *** Saturday 08 June 2019 01:38:37 +0100 (0:00:02.533) 0:01:23.308 ********* TASK [bootstrap-os : Update hostname fact (CoreOS and Tumbleweed only)] ******** Saturday 08 June 2019 01:38:37 +0100 (0:00:00.124) 0:01:23.432 ********* PLAY [k8s-cluster:etcd:calico-rr] ********************************************** TASK [Gathering Facts] ********************************************************* Saturday 08 June 2019 01:38:37 +0100 (0:00:00.115) 0:01:23.548 ********* ok: [kube1] ok: [kube3] TASK [gather facts from all instances] ***************************************** Saturday 08 June 2019 01:38:39 +0100 (0:00:02.047) 0:01:25.596 ********* ok: [kube3 -> 192.168.121.206] => (item=kube1) [WARNING]: Unhandled error in Python interpreter discovery for host kube3: Failed to connect to the host via ssh: ssh: Could not resolve hostname kube2: Name or service not known failed: [kube3] (item=kube2) => {"ansible_loop_var": "item", "item": "kube2", "msg": "Data could not be sent to remote host \"kube2\". 
Make sure this host can be reached over ssh: ssh: Could not resolve hostname kube2: Name or service not known\r\n", "unreachable": true} ok: [kube1 -> 192.168.121.206] => (item=kube1) [WARNING]: Unhandled error in Python interpreter discovery for host kube1: Failed to connect to the host via ssh: ssh: Could not resolve hostname kube2: Name or service not known failed: [kube1] (item=kube2) => {"ansible_loop_var": "item", "item": "kube2", "msg": "Data could not be sent to remote host \"kube2\". Make sure this host can be reached over ssh: ssh: Could not resolve hostname kube2: Name or service not known\r\n", "unreachable": true} ok: [kube3 -> 192.168.121.173] => (item=kube3) ok: [kube1 -> 192.168.121.173] => (item=kube3) ok: [kube3 -> 192.168.121.206] => (item=kube1) failed: [kube3] (item=kube2) => {"ansible_loop_var": "item", "item": "kube2", "msg": "Data could not be sent to remote host \"kube2\". Make sure this host can be reached over ssh: ssh: Could not resolve hostname kube2: Name or service not known\r\n", "unreachable": true} ok: [kube1 -> 192.168.121.206] => (item=kube1) failed: [kube1] (item=kube2) => {"ansible_loop_var": "item", "item": "kube2", "msg": "Data could not be sent to remote host \"kube2\". Make sure this host can be reached over ssh: ssh: Could not resolve hostname kube2: Name or service not known\r\n", "unreachable": true} ok: [kube3 -> 192.168.121.173] => (item=kube3) fatal: [kube3]: UNREACHABLE! 
=> {"changed": false, "msg": "All items completed", "results": [{"ansible_facts": {"ansible_all_ipv4_addresses": ["192.168.121.206", "172.17.0.1"], "ansible_all_ipv6_addresses": ["fe80::5054:ff:fe00:ec29"], "ansible_apparmor": {"status": "disabled"}, "ansible_architecture": "x86_64", "ansible_bios_date": "01/01/2011", "ansible_bios_version": "0.5.1", "ansible_cmdline": {"BOOT_IMAGE": "/ostree/centos-atomic-host-ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/vmlinuz-3.10.0-957.1.3.el7.x86_64", "console": "ttyS0,115200n8", "crashkernel": "auto", "no_timer_check": true, "ostree": "/ostree/boot.1/centos-atomic-host/ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/0", "rd.lvm.lv": "atomicos/root", "root": "/dev/mapper/atomicos-root"}, "ansible_date_time": {"date": "2019-06-08", "day": "08", "epoch": "1559954321", "hour": "00", "iso8601": "2019-06-08T00:38:41Z", "iso8601_basic": "20190608T003841373079", "iso8601_basic_short": "20190608T003841", "iso8601_micro": "2019-06-08T00:38:41.373245Z", "minute": "38", "month": "06", "second": "41", "time": "00:38:41", "tz": "UTC", "tz_offset": "+0000", "weekday": "Saturday", "weekday_number": "6", "weeknumber": "22", "year": "2019"}, "ansible_default_ipv4": {"address": "192.168.121.206", "alias": "eth0", "broadcast": "192.168.121.255", "gateway": "192.168.121.1", "interface": "eth0", "macaddress": "52:54:00:00:ec:29", "mtu": 1500, "netmask": "255.255.255.0", "network": "192.168.121.0", "type": "ether"}, "ansible_default_ipv6": {}, "ansible_device_links": {"ids": {"dm-0": ["dm-name-atomicos-root", "dm-uuid-LVM-66eMHV6iDKuNpuUkbfaS59FexWnof79axp8dVuJg8PQpi4u2jEXiS2onl9tiFl4L"], "vda2": ["lvm-pv-uuid-Igwjxr-wF4v-rdeh-DZGI-IDKO-Jk6L-s4N1kl"], "vdb": ["lvm-pv-uuid-Wj1hwx-dS1l-tU82-z8e0-xqxV-2Ndh-14zobh"]}, "labels": {}, "masters": {"vda2": ["dm-0"], "vdb": ["dm-0"]}, "uuids": {"dm-0": ["3aa470ac-9a1b-44c5-871a-d72d513af0e2"], "vda1": ["5804e2ea-a86e-45e1-92c8-f225b7fb60e0"]}}, "ansible_devices": 
{"dm-0": {"holders": [], "host": "", "links": {"ids": ["dm-name-atomicos-root", "dm-uuid-LVM-66eMHV6iDKuNpuUkbfaS59FexWnof79axp8dVuJg8PQpi4u2jEXiS2onl9tiFl4L"], "labels": [], "masters": [], "uuids": ["3aa470ac-9a1b-44c5-871a-d72d513af0e2"]}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "", "sectors": "56139776", "sectorsize": "512", "size": "26.77 GB", "support_discard": "0", "vendor": null, "virtual": 1}, "vda": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {"vda1": {"holders": [], "links": {"ids": [], "labels": [], "masters": [], "uuids": ["5804e2ea-a86e-45e1-92c8-f225b7fb60e0"]}, "sectors": "614400", "sectorsize": 512, "size": "300.00 MB", "start": "2048", "uuid": "5804e2ea-a86e-45e1-92c8-f225b7fb60e0"}, "vda2": {"holders": ["atomicos-root"], "links": {"ids": ["lvm-pv-uuid-Igwjxr-wF4v-rdeh-DZGI-IDKO-Jk6L-s4N1kl"], "labels": [], "masters": ["dm-0"], "uuids": []}, "sectors": "20355072", "sectorsize": 512, "size": "9.71 GB", "start": "616448", "uuid": null}}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "23068672", "sectorsize": "512", "size": "11.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdb": {"holders": ["atomicos-root"], "host": "", "links": {"ids": ["lvm-pv-uuid-Wj1hwx-dS1l-tU82-z8e0-xqxV-2Ndh-14zobh"], "labels": [], "masters": ["dm-0"], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "41943040", "sectorsize": "512", "size": "20.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdc": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", 
"sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdd": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vde": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}}, "ansible_distribution": "CentOS", "ansible_distribution_file_parsed": true, "ansible_distribution_file_path": "/etc/redhat-release", "ansible_distribution_file_variety": "RedHat", "ansible_distribution_major_version": "7", "ansible_distribution_release": "Core", "ansible_distribution_version": "7", "ansible_dns": {"nameservers": ["192.168.121.1"]}, "ansible_docker0": {"active": false, "device": "docker0", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "off [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off 
[fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "on", "tx_gre_csum_segmentation": "on", "tx_gre_segmentation": "on", "tx_gso_partial": "on", "tx_gso_robust": "on", "tx_ipip_segmentation": "on", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "on", "tx_udp_tnl_segmentation": "on", "tx_vlan_offload": "on", "tx_vlan_stag_hw_insert": "on", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]"}, "hw_timestamp_filters": [], "id": "8000.0242df3ea2e3", "interfaces": [], "ipv4": {"address": "172.17.0.1", "broadcast": "global", "netmask": "255.255.0.0", "network": "172.17.0.0"}, "macaddress": "02:42:df:3e:a2:e3", "mtu": 1500, "promisc": false, "stp": false, "timestamping": ["rx_software", "software"], "type": "bridge"}, "ansible_domain": "", "ansible_effective_group_id": 0, "ansible_effective_user_id": 0, "ansible_env": {"HOME": "/root", "LANG": "en_US.UTF-8", "LOGNAME": "root", "MAIL": "/var/mail/vagrant", "PATH": "/sbin:/bin:/usr/sbin:/usr/bin", "PWD": "/var/home/vagrant", "SHELL": "/bin/bash", "SHLVL": "1", "SUDO_COMMAND": "/bin/sh -c echo BECOME-SUCCESS-hrznwlougdzklryifmsjxptqlqtcifzw ; /usr/bin/python", "SUDO_GID": "1000", "SUDO_UID": "1000", "SUDO_USER": "vagrant", "TERM": "unknown", "USER": "root", "USERNAME": "root", "XDG_SESSION_ID": "1", "_": "/usr/bin/python"}, "ansible_eth0": {"active": true, "device": "eth0", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", 
"generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "off [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "on [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "off [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "off", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]"}, "hw_timestamp_filters": [], "ipv4": {"address": "192.168.121.206", "broadcast": "192.168.121.255", "netmask": "255.255.255.0", "network": "192.168.121.0"}, "ipv6": [{"address": "fe80::5054:ff:fe00:ec29", "prefix": "64", "scope": "link"}], "macaddress": "52:54:00:00:ec:29", "module": "virtio_net", "mtu": 1500, "pciid": 
"virtio8", "promisc": false, "timestamping": ["rx_software", "software"], "type": "ether"}, "ansible_fibre_channel_wwn": [], "ansible_fips": false, "ansible_form_factor": "Other", "ansible_fqdn": "kube1", "ansible_hostname": "kube1", "ansible_hostnqn": "", "ansible_interfaces": ["lo", "docker0", "eth0"], "ansible_is_chroot": false, "ansible_iscsi_iqn": "iqn.1994-05.com.redhat:7cbffc69d119", "ansible_kernel": "3.10.0-957.1.3.el7.x86_64", "ansible_lo": {"active": true, "device": "lo", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "on [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "on [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off [fixed]", "tx_scatter_gather": "on [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", 
"tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "on [fixed]"}, "hw_timestamp_filters": [], "ipv4": {"address": "127.0.0.1", "broadcast": "host", "netmask": "255.0.0.0", "network": "127.0.0.0"}, "ipv6": [{"address": "::1", "prefix": "128", "scope": "host"}], "mtu": 65536, "promisc": false, "timestamping": ["rx_software", "software"], "type": "loopback"}, "ansible_local": {}, "ansible_lsb": {}, "ansible_lvm": {"lvs": {"root": {"size_g": "26.77", "vg": "atomicos"}}, "pvs": {"/dev/vda2": {"free_g": "0", "size_g": "9.70", "vg": "atomicos"}, "/dev/vdb": {"free_g": "2.93", "size_g": "20.00", "vg": "atomicos"}}, "vgs": {"atomicos": {"free_g": "2.93", "num_lvs": "1", "num_pvs": "2", "size_g": "29.70"}}}, "ansible_machine": "x86_64", "ansible_machine_id": "2311b13a8e604ab497f5045fa50f0e62", "ansible_memfree_mb": 1468, "ansible_memory_mb": {"nocache": {"free": 1633, "used": 205}, "real": {"free": 1468, "total": 1838, "used": 370}, "swap": {"cached": 0, "free": 0, "total": 0, "used": 0}}, "ansible_memtotal_mb": 1838, "ansible_mounts": [{"block_available": 6620697, "block_size": 4096, "block_total": 7014912, "block_used": 394215, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004276, "inode_total": 14034944, "inode_used": 30668, "mount": "/sysroot", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 27118374912, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6620683, "block_size": 4096, "block_total": 7014912, "block_used": 394229, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004274, "inode_total": 14034944, "inode_used": 30670, "mount": "/var", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27118317568, "size_total": 
28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6620683, "block_size": 4096, "block_total": 7014912, "block_used": 394229, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004275, "inode_total": 14034944, "inode_used": 30669, "mount": "/usr", "options": "ro,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27118317568, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 51418, "block_size": 4096, "block_total": 75945, "block_used": 24527, "device": "/dev/vda1", "fstype": "xfs", "inode_available": 153279, "inode_total": 153600, "inode_used": 321, "mount": "/boot", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 210608128, "size_total": 311070720, "uuid": "5804e2ea-a86e-45e1-92c8-f225b7fb60e0"}, {"block_available": 6620670, "block_size": 4096, "block_total": 7014912, "block_used": 394242, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004271, "inode_total": 14034944, "inode_used": 30673, "mount": "/var/lib/docker/containers", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27118264320, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6620697, "block_size": 4096, "block_total": 7014912, "block_used": 394215, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004276, "inode_total": 14034944, "inode_used": 30668, "mount": "/", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27118374912, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6620670, "block_size": 4096, "block_total": 7014912, "block_used": 394242, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004271, "inode_total": 14034944, "inode_used": 30673, "mount": "/var/lib/docker/overlay2", "options": 
"rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27118264320, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}], "ansible_nodename": "kube1", "ansible_os_family": "RedHat", "ansible_pkg_mgr": "yum", "ansible_proc_cmdline": {"BOOT_IMAGE": "/ostree/centos-atomic-host-ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/vmlinuz-3.10.0-957.1.3.el7.x86_64", "console": ["tty1", "ttyS0,115200n8"], "crashkernel": "auto", "no_timer_check": true, "ostree": "/ostree/boot.1/centos-atomic-host/ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/0", "rd.lvm.lv": "atomicos/root", "root": "/dev/mapper/atomicos-root"}, "ansible_processor": ["0", "AuthenticAMD", "AMD Opteron(tm) Processor 4365 EE", "1", "AuthenticAMD", "AMD Opteron(tm) Processor 4365 EE"], "ansible_processor_cores": 1, "ansible_processor_count": 2, "ansible_processor_threads_per_core": 1, "ansible_processor_vcpus": 2, "ansible_product_name": "KVM", "ansible_product_serial": "NA", "ansible_product_uuid": "2311B13A-8E60-4AB4-97F5-045FA50F0E62", "ansible_product_version": "RHEL 7.0.0 PC (i440FX + PIIX, 1996)", "ansible_python": {"executable": "/usr/bin/python", "has_sslcontext": true, "type": "CPython", "version": {"major": 2, "micro": 5, "minor": 7, "releaselevel": "final", "serial": 0}, "version_info": [2, 7, 5, "final", 0]}, "ansible_python_version": "2.7.5", "ansible_real_group_id": 0, "ansible_real_user_id": 0, "ansible_selinux": {"config_mode": "enforcing", "mode": "enforcing", "policyvers": 31, "status": "enabled", "type": "targeted"}, "ansible_selinux_python_present": true, "ansible_service_mgr": "systemd", "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCPLKuR7dDwngf/EEAxmNzKrGdQb3FEcupLKG/qnhtNMulCYoWAxBNF3AyqiY8Uk+wXCKx+bhRvNKeQdpZVm0ys=", "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAIIyxwf+XumGsMPKM/p2NTaOEoY4raBX9y7daAtt97CZW", "ansible_ssh_host_key_rsa_public": 
"AAAAB3NzaC1yc2EAAAADAQABAAABAQCuVNWF19y6tVAFC8prGl2YX58IDZ2oAOwAIACI+2bPtJhA6h7jxUC6XetUUOEnQyhIpw7GIO+2uI0ZnwRg8M9duEv65SUbeMlNA0yRoU3jcebmGKE5Oml+/pZh7p5q8E3SnGyfwT2gbIY9NSqcSaM03P2IudoiPZg2yAlK6+Igg/YKcoXL1xstBrAp4nY0TTok/VNhoRJNVGF7bkEfJbW1uXThhgj09Oq62uGhFKShfnmrQP2TeH+WapN9K2S0SCRWMId0PzoJZdfKU48zVbttg6hEMqo8JsKk12r4OPYZlujCYCraDg4GE8ApUCAymiFxT4P38weN1roj7QomBg+D", "ansible_swapfree_mb": 0, "ansible_swaptotal_mb": 0, "ansible_system": "Linux", "ansible_system_capabilities": ["cap_chown", "cap_dac_override", "cap_dac_read_search", "cap_fowner", "cap_fsetid", "cap_kill", "cap_setgid", "cap_setuid", "cap_setpcap", "cap_linux_immutable", "cap_net_bind_service", "cap_net_broadcast", "cap_net_admin", "cap_net_raw", "cap_ipc_lock", "cap_ipc_owner", "cap_sys_module", "cap_sys_rawio", "cap_sys_chroot", "cap_sys_ptrace", "cap_sys_pacct", "cap_sys_admin", "cap_sys_boot", "cap_sys_nice", "cap_sys_resource", "cap_sys_time", "cap_sys_tty_config", "cap_mknod", "cap_lease", "cap_audit_write", "cap_audit_control", "cap_setfcap", "cap_mac_override", "cap_mac_admin", "cap_syslog", "35", "36+ep"], "ansible_system_capabilities_enforced": "True", "ansible_system_vendor": "Red Hat", "ansible_uptime_seconds": 30, "ansible_user_dir": "/root", "ansible_user_gecos": "root", "ansible_user_gid": 0, "ansible_user_id": "root", "ansible_user_shell": "/bin/bash", "ansible_user_uid": 0, "ansible_userspace_architecture": "x86_64", "ansible_userspace_bits": "64", "ansible_virtualization_role": "guest", "ansible_virtualization_type": "kvm", "gather_subset": ["all"], "module_setup": true}, "ansible_loop_var": "item", "changed": false, "failed": false, "invocation": {"module_args": {"fact_path": "/etc/ansible/facts.d", "filter": "*", "gather_subset": ["all"], "gather_timeout": 10}}, "item": "kube1"}, {"ansible_loop_var": "item", "item": "kube2", "msg": "Data could not be sent to remote host \"kube2\". 
Make sure this host can be reached over ssh: ssh: Could not resolve hostname kube2: Name or service not known\r\n", "unreachable": true}, {"ansible_facts": {"ansible_all_ipv4_addresses": ["192.168.121.173", "172.17.0.1"], "ansible_all_ipv6_addresses": ["fe80::5054:ff:fe87:7c31"], "ansible_apparmor": {"status": "disabled"}, "ansible_architecture": "x86_64", "ansible_bios_date": "01/01/2011", "ansible_bios_version": "0.5.1", "ansible_cmdline": {"BOOT_IMAGE": "/ostree/centos-atomic-host-ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/vmlinuz-3.10.0-957.1.3.el7.x86_64", "console": "ttyS0,115200n8", "crashkernel": "auto", "no_timer_check": true, "ostree": "/ostree/boot.1/centos-atomic-host/ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/0", "rd.lvm.lv": "atomicos/root", "root": "/dev/mapper/atomicos-root"}, "ansible_date_time": {"date": "2019-06-08", "day": "08", "epoch": "1559954323", "hour": "00", "iso8601": "2019-06-08T00:38:43Z", "iso8601_basic": "20190608T003843301247", "iso8601_basic_short": "20190608T003843", "iso8601_micro": "2019-06-08T00:38:43.301459Z", "minute": "38", "month": "06", "second": "43", "time": "00:38:43", "tz": "UTC", "tz_offset": "+0000", "weekday": "Saturday", "weekday_number": "6", "weeknumber": "22", "year": "2019"}, "ansible_default_ipv4": {"address": "192.168.121.173", "alias": "eth0", "broadcast": "192.168.121.255", "gateway": "192.168.121.1", "interface": "eth0", "macaddress": "52:54:00:87:7c:31", "mtu": 1500, "netmask": "255.255.255.0", "network": "192.168.121.0", "type": "ether"}, "ansible_default_ipv6": {}, "ansible_device_links": {"ids": {"dm-0": ["dm-name-atomicos-root", "dm-uuid-LVM-66eMHV6iDKuNpuUkbfaS59FexWnof79axp8dVuJg8PQpi4u2jEXiS2onl9tiFl4L"], "vda2": ["lvm-pv-uuid-Igwjxr-wF4v-rdeh-DZGI-IDKO-Jk6L-s4N1kl"], "vdb": ["lvm-pv-uuid-YhbqDz-0W2G-eGMk-CIRu-E6EM-77tY-x6ge4x"]}, "labels": {}, "masters": {"vda2": ["dm-0"], "vdb": ["dm-0"]}, "uuids": {"dm-0": ["3aa470ac-9a1b-44c5-871a-d72d513af0e2"], 
"vda1": ["5804e2ea-a86e-45e1-92c8-f225b7fb60e0"]}}, "ansible_devices": {"dm-0": {"holders": [], "host": "", "links": {"ids": ["dm-name-atomicos-root", "dm-uuid-LVM-66eMHV6iDKuNpuUkbfaS59FexWnof79axp8dVuJg8PQpi4u2jEXiS2onl9tiFl4L"], "labels": [], "masters": [], "uuids": ["3aa470ac-9a1b-44c5-871a-d72d513af0e2"]}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "", "sectors": "56139776", "sectorsize": "512", "size": "26.77 GB", "support_discard": "0", "vendor": null, "virtual": 1}, "vda": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {"vda1": {"holders": [], "links": {"ids": [], "labels": [], "masters": [], "uuids": ["5804e2ea-a86e-45e1-92c8-f225b7fb60e0"]}, "sectors": "614400", "sectorsize": 512, "size": "300.00 MB", "start": "2048", "uuid": "5804e2ea-a86e-45e1-92c8-f225b7fb60e0"}, "vda2": {"holders": ["atomicos-root"], "links": {"ids": ["lvm-pv-uuid-Igwjxr-wF4v-rdeh-DZGI-IDKO-Jk6L-s4N1kl"], "labels": [], "masters": ["dm-0"], "uuids": []}, "sectors": "20355072", "sectorsize": 512, "size": "9.71 GB", "start": "616448", "uuid": null}}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "23068672", "sectorsize": "512", "size": "11.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdb": {"holders": ["atomicos-root"], "host": "", "links": {"ids": ["lvm-pv-uuid-YhbqDz-0W2G-eGMk-CIRu-E6EM-77tY-x6ge4x"], "labels": [], "masters": ["dm-0"], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "41943040", "sectorsize": "512", "size": "20.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdc": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, 
"model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdd": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vde": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}}, "ansible_distribution": "CentOS", "ansible_distribution_file_parsed": true, "ansible_distribution_file_path": "/etc/redhat-release", "ansible_distribution_file_variety": "RedHat", "ansible_distribution_major_version": "7", "ansible_distribution_release": "Core", "ansible_distribution_version": "7", "ansible_dns": {"nameservers": ["192.168.121.1"]}, "ansible_docker0": {"active": false, "device": "docker0", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "off [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off 
[fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "on", "tx_gre_csum_segmentation": "on", "tx_gre_segmentation": "on", "tx_gso_partial": "on", "tx_gso_robust": "on", "tx_ipip_segmentation": "on", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "on", "tx_udp_tnl_segmentation": "on", "tx_vlan_offload": "on", "tx_vlan_stag_hw_insert": "on", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]"}, "hw_timestamp_filters": [], "id": "8000.0242f6cca7a2", "interfaces": [], "ipv4": {"address": "172.17.0.1", "broadcast": "global", "netmask": "255.255.0.0", "network": "172.17.0.0"}, "macaddress": "02:42:f6:cc:a7:a2", "mtu": 1500, "promisc": false, "stp": false, "timestamping": ["rx_software", "software"], "type": "bridge"}, "ansible_domain": "", "ansible_effective_group_id": 0, "ansible_effective_user_id": 0, "ansible_env": {"HOME": "/root", "LANG": "en_US.UTF-8", "LOGNAME": "root", "MAIL": "/var/mail/vagrant", "PATH": "/sbin:/bin:/usr/sbin:/usr/bin", "PWD": "/var/home/vagrant", "SHELL": "/bin/bash", "SHLVL": "1", "SUDO_COMMAND": "/bin/sh -c echo BECOME-SUCCESS-pkdvzuxcmsskekhfjynkjgnpciilgibi ; /usr/bin/python", "SUDO_GID": "1000", "SUDO_UID": "1000", "SUDO_USER": "vagrant", "TERM": "unknown", "USER": "root", "USERNAME": "root", "XDG_SESSION_ID": "1", "_": "/usr/bin/python"}, "ansible_eth0": {"active": true, "device": "eth0", 
"features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "off [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "on [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "off [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "off", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]"}, "hw_timestamp_filters": [], "ipv4": {"address": "192.168.121.173", "broadcast": "192.168.121.255", "netmask": "255.255.255.0", "network": "192.168.121.0"}, "ipv6": [{"address": "fe80::5054:ff:fe87:7c31", "prefix": "64", "scope": "link"}], "macaddress": 
"52:54:00:87:7c:31", "module": "virtio_net", "mtu": 1500, "pciid": "virtio8", "promisc": false, "timestamping": ["rx_software", "software"], "type": "ether"}, "ansible_fibre_channel_wwn": [], "ansible_fips": false, "ansible_form_factor": "Other", "ansible_fqdn": "kube3", "ansible_hostname": "kube3", "ansible_hostnqn": "", "ansible_interfaces": ["lo", "docker0", "eth0"], "ansible_is_chroot": false, "ansible_iscsi_iqn": "iqn.1994-05.com.redhat:7cbffc69d119", "ansible_kernel": "3.10.0-957.1.3.el7.x86_64", "ansible_lo": {"active": true, "device": "lo", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "on [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "on [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off [fixed]", "tx_scatter_gather": "on [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", 
"tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "on [fixed]"}, "hw_timestamp_filters": [], "ipv4": {"address": "127.0.0.1", "broadcast": "host", "netmask": "255.0.0.0", "network": "127.0.0.0"}, "ipv6": [{"address": "::1", "prefix": "128", "scope": "host"}], "mtu": 65536, "promisc": false, "timestamping": ["rx_software", "software"], "type": "loopback"}, "ansible_local": {}, "ansible_lsb": {}, "ansible_lvm": {"lvs": {"root": {"size_g": "26.77", "vg": "atomicos"}}, "pvs": {"/dev/vda2": {"free_g": "0", "size_g": "9.70", "vg": "atomicos"}, "/dev/vdb": {"free_g": "2.93", "size_g": "20.00", "vg": "atomicos"}}, "vgs": {"atomicos": {"free_g": "2.93", "num_lvs": "1", "num_pvs": "2", "size_g": "29.70"}}}, "ansible_machine": "x86_64", "ansible_machine_id": "2e752c00b8ab40e8bd5969993469eba7", "ansible_memfree_mb": 1487, "ansible_memory_mb": {"nocache": {"free": 1658, "used": 180}, "real": {"free": 1487, "total": 1838, "used": 351}, "swap": {"cached": 0, "free": 0, "total": 0, "used": 0}}, "ansible_memtotal_mb": 1838, "ansible_mounts": [{"block_available": 6619931, "block_size": 4096, "block_total": 7014912, "block_used": 394981, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004230, "inode_total": 14034944, "inode_used": 30714, "mount": "/sysroot", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 27115237376, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6619925, "block_size": 4096, "block_total": 7014912, "block_used": 394987, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004230, "inode_total": 14034944, "inode_used": 30714, "mount": "/var", "options": 
"rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27115212800, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6619925, "block_size": 4096, "block_total": 7014912, "block_used": 394987, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004230, "inode_total": 14034944, "inode_used": 30714, "mount": "/usr", "options": "ro,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27115212800, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 51418, "block_size": 4096, "block_total": 75945, "block_used": 24527, "device": "/dev/vda1", "fstype": "xfs", "inode_available": 153279, "inode_total": 153600, "inode_used": 321, "mount": "/boot", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 210608128, "size_total": 311070720, "uuid": "5804e2ea-a86e-45e1-92c8-f225b7fb60e0"}, {"block_available": 6619925, "block_size": 4096, "block_total": 7014912, "block_used": 394987, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004230, "inode_total": 14034944, "inode_used": 30714, "mount": "/var/lib/docker/containers", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27115212800, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6619931, "block_size": 4096, "block_total": 7014912, "block_used": 394981, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004230, "inode_total": 14034944, "inode_used": 30714, "mount": "/", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27115237376, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6619925, "block_size": 4096, "block_total": 7014912, "block_used": 394987, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004230, "inode_total": 14034944, 
"inode_used": 30714, "mount": "/var/lib/docker/overlay2", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27115212800, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}], "ansible_nodename": "kube3", "ansible_os_family": "RedHat", "ansible_pkg_mgr": "yum", "ansible_proc_cmdline": {"BOOT_IMAGE": "/ostree/centos-atomic-host-ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/vmlinuz-3.10.0-957.1.3.el7.x86_64", "console": ["tty1", "ttyS0,115200n8"], "crashkernel": "auto", "no_timer_check": true, "ostree": "/ostree/boot.1/centos-atomic-host/ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/0", "rd.lvm.lv": "atomicos/root", "root": "/dev/mapper/atomicos-root"}, "ansible_processor": ["0", "AuthenticAMD", "AMD Opteron(tm) Processor 4365 EE", "1", "AuthenticAMD", "AMD Opteron(tm) Processor 4365 EE"], "ansible_processor_cores": 1, "ansible_processor_count": 2, "ansible_processor_threads_per_core": 1, "ansible_processor_vcpus": 2, "ansible_product_name": "KVM", "ansible_product_serial": "NA", "ansible_product_uuid": "2E752C00-B8AB-40E8-BD59-69993469EBA7", "ansible_product_version": "RHEL 7.0.0 PC (i440FX + PIIX, 1996)", "ansible_python": {"executable": "/usr/bin/python", "has_sslcontext": true, "type": "CPython", "version": {"major": 2, "micro": 5, "minor": 7, "releaselevel": "final", "serial": 0}, "version_info": [2, 7, 5, "final", 0]}, "ansible_python_version": "2.7.5", "ansible_real_group_id": 0, "ansible_real_user_id": 0, "ansible_selinux": {"config_mode": "enforcing", "mode": "enforcing", "policyvers": 31, "status": "enabled", "type": "targeted"}, "ansible_selinux_python_present": true, "ansible_service_mgr": "systemd", "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFTuj9JzeRMbuP1/9LA/e0bm2zHbfl3xO0E5Om9TJKpvVzPFsuAiynh0pWT2z+0ri9TlreMQQk8ScMY+ULesX5Q=", "ansible_ssh_host_key_ed25519_public": 
"AAAAC3NzaC1lZDI1NTE5AAAAIKyu+aDgqoL8NMs6gOYGj5LEXjpOTDN9qCaKD9B4qJva", "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABAQC5OcEwe14YYYaCwQwjCw0e0KvCo+8jULb2XL+oDZYpf0nXEnb1/HZCjlN8ZFhJ6/ACT11KtP/JgLdPRCenhOYGTbkKcm8E6BuGG26F+OMkT/T66qfprzc4lf9WJCr9in8nsqnTTskEcg+KTuHiZ+cBkZY3cORwHRRJQgPnOFAHU9khE/32/6suAVp4N43SYCtH+8sdra16kkQa6m5pppjPrq1LiUGNcJoOZ/J0z+1xyGcF0NeJv5zGmkecL40YdDGbgATu4lkASeFV5te/G7sRDRDmPbkSrKoi+n4raQhNINA1w78KR6NvruoRNoEgH+wRh4FTlIVh/AIAPvtsMOql", "ansible_swapfree_mb": 0, "ansible_swaptotal_mb": 0, "ansible_system": "Linux", "ansible_system_capabilities": ["cap_chown", "cap_dac_override", "cap_dac_read_search", "cap_fowner", "cap_fsetid", "cap_kill", "cap_setgid", "cap_setuid", "cap_setpcap", "cap_linux_immutable", "cap_net_bind_service", "cap_net_broadcast", "cap_net_admin", "cap_net_raw", "cap_ipc_lock", "cap_ipc_owner", "cap_sys_module", "cap_sys_rawio", "cap_sys_chroot", "cap_sys_ptrace", "cap_sys_pacct", "cap_sys_admin", "cap_sys_boot", "cap_sys_nice", "cap_sys_resource", "cap_sys_time", "cap_sys_tty_config", "cap_mknod", "cap_lease", "cap_audit_write", "cap_audit_control", "cap_setfcap", "cap_mac_override", "cap_mac_admin", "cap_syslog", "35", "36+ep"], "ansible_system_capabilities_enforced": "True", "ansible_system_vendor": "Red Hat", "ansible_uptime_seconds": 32, "ansible_user_dir": "/root", "ansible_user_gecos": "root", "ansible_user_gid": 0, "ansible_user_id": "root", "ansible_user_shell": "/bin/bash", "ansible_user_uid": 0, "ansible_userspace_architecture": "x86_64", "ansible_userspace_bits": "64", "ansible_virtualization_role": "guest", "ansible_virtualization_type": "kvm", "gather_subset": ["all"], "module_setup": true}, "ansible_loop_var": "item", "changed": false, "failed": false, "invocation": {"module_args": {"fact_path": "/etc/ansible/facts.d", "filter": "*", "gather_subset": ["all"], "gather_timeout": 10}}, "item": "kube3"}, {"ansible_facts": {"ansible_all_ipv4_addresses": ["192.168.121.206", "172.17.0.1"], 
"ansible_all_ipv6_addresses": ["fe80::5054:ff:fe00:ec29"], "ansible_apparmor": {"status": "disabled"}, "ansible_architecture": "x86_64", "ansible_bios_date": "01/01/2011", "ansible_bios_version": "0.5.1", "ansible_cmdline": {"BOOT_IMAGE": "/ostree/centos-atomic-host-ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/vmlinuz-3.10.0-957.1.3.el7.x86_64", "console": "ttyS0,115200n8", "crashkernel": "auto", "no_timer_check": true, "ostree": "/ostree/boot.1/centos-atomic-host/ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/0", "rd.lvm.lv": "atomicos/root", "root": "/dev/mapper/atomicos-root"}, "ansible_date_time": {"date": "2019-06-08", "day": "08", "epoch": "1559954325", "hour": "00", "iso8601": "2019-06-08T00:38:45Z", "iso8601_basic": "20190608T003845529902", "iso8601_basic_short": "20190608T003845", "iso8601_micro": "2019-06-08T00:38:45.530103Z", "minute": "38", "month": "06", "second": "45", "time": "00:38:45", "tz": "UTC", "tz_offset": "+0000", "weekday": "Saturday", "weekday_number": "6", "weeknumber": "22", "year": "2019"}, "ansible_default_ipv4": {"address": "192.168.121.206", "alias": "eth0", "broadcast": "192.168.121.255", "gateway": "192.168.121.1", "interface": "eth0", "macaddress": "52:54:00:00:ec:29", "mtu": 1500, "netmask": "255.255.255.0", "network": "192.168.121.0", "type": "ether"}, "ansible_default_ipv6": {}, "ansible_device_links": {"ids": {"dm-0": ["dm-name-atomicos-root", "dm-uuid-LVM-66eMHV6iDKuNpuUkbfaS59FexWnof79axp8dVuJg8PQpi4u2jEXiS2onl9tiFl4L"], "vda2": ["lvm-pv-uuid-Igwjxr-wF4v-rdeh-DZGI-IDKO-Jk6L-s4N1kl"], "vdb": ["lvm-pv-uuid-Wj1hwx-dS1l-tU82-z8e0-xqxV-2Ndh-14zobh"]}, "labels": {}, "masters": {"vda2": ["dm-0"], "vdb": ["dm-0"]}, "uuids": {"dm-0": ["3aa470ac-9a1b-44c5-871a-d72d513af0e2"], "vda1": ["5804e2ea-a86e-45e1-92c8-f225b7fb60e0"]}}, "ansible_devices": {"dm-0": {"holders": [], "host": "", "links": {"ids": ["dm-name-atomicos-root", 
"dm-uuid-LVM-66eMHV6iDKuNpuUkbfaS59FexWnof79axp8dVuJg8PQpi4u2jEXiS2onl9tiFl4L"], "labels": [], "masters": [], "uuids": ["3aa470ac-9a1b-44c5-871a-d72d513af0e2"]}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "", "sectors": "56139776", "sectorsize": "512", "size": "26.77 GB", "support_discard": "0", "vendor": null, "virtual": 1}, "vda": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {"vda1": {"holders": [], "links": {"ids": [], "labels": [], "masters": [], "uuids": ["5804e2ea-a86e-45e1-92c8-f225b7fb60e0"]}, "sectors": "614400", "sectorsize": 512, "size": "300.00 MB", "start": "2048", "uuid": "5804e2ea-a86e-45e1-92c8-f225b7fb60e0"}, "vda2": {"holders": ["atomicos-root"], "links": {"ids": ["lvm-pv-uuid-Igwjxr-wF4v-rdeh-DZGI-IDKO-Jk6L-s4N1kl"], "labels": [], "masters": ["dm-0"], "uuids": []}, "sectors": "20355072", "sectorsize": 512, "size": "9.71 GB", "start": "616448", "uuid": null}}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "23068672", "sectorsize": "512", "size": "11.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdb": {"holders": ["atomicos-root"], "host": "", "links": {"ids": ["lvm-pv-uuid-Wj1hwx-dS1l-tU82-z8e0-xqxV-2Ndh-14zobh"], "labels": [], "masters": ["dm-0"], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "41943040", "sectorsize": "512", "size": "20.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdc": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", 
"sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdd": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vde": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}}, "ansible_distribution": "CentOS", "ansible_distribution_file_parsed": true, "ansible_distribution_file_path": "/etc/redhat-release", "ansible_distribution_file_variety": "RedHat", "ansible_distribution_major_version": "7", "ansible_distribution_release": "Core", "ansible_distribution_version": "7", "ansible_dns": {"nameservers": ["192.168.121.1"]}, "ansible_docker0": {"active": false, "device": "docker0", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "off [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", 
"tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "on", "tx_gre_csum_segmentation": "on", "tx_gre_segmentation": "on", "tx_gso_partial": "on", "tx_gso_robust": "on", "tx_ipip_segmentation": "on", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "on", "tx_udp_tnl_segmentation": "on", "tx_vlan_offload": "on", "tx_vlan_stag_hw_insert": "on", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]"}, "hw_timestamp_filters": [], "id": "8000.0242df3ea2e3", "interfaces": [], "ipv4": {"address": "172.17.0.1", "broadcast": "global", "netmask": "255.255.0.0", "network": "172.17.0.0"}, "macaddress": "02:42:df:3e:a2:e3", "mtu": 1500, "promisc": false, "stp": false, "timestamping": ["rx_software", "software"], "type": "bridge"}, "ansible_domain": "", "ansible_effective_group_id": 0, "ansible_effective_user_id": 0, "ansible_env": {"HOME": "/root", "LANG": "en_US.UTF-8", "LOGNAME": "root", "MAIL": "/var/mail/vagrant", "PATH": "/sbin:/bin:/usr/sbin:/usr/bin", "PWD": "/var/home/vagrant", "SHELL": "/bin/bash", "SHLVL": "1", "SUDO_COMMAND": "/bin/sh -c echo BECOME-SUCCESS-bimsvzthxhpxxuwpxqnskomtsyhmgkpx ; /usr/bin/python", "SUDO_GID": "1000", "SUDO_UID": "1000", "SUDO_USER": "vagrant", "TERM": "unknown", "USER": "root", "USERNAME": "root", "XDG_SESSION_ID": "1", "_": "/usr/bin/python"}, "ansible_eth0": {"active": true, "device": "eth0", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", 
"highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "off [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "on [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "off [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "off", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]"}, "hw_timestamp_filters": [], "ipv4": {"address": "192.168.121.206", "broadcast": "192.168.121.255", "netmask": "255.255.255.0", "network": "192.168.121.0"}, "ipv6": [{"address": "fe80::5054:ff:fe00:ec29", "prefix": "64", "scope": "link"}], "macaddress": "52:54:00:00:ec:29", "module": "virtio_net", "mtu": 1500, "pciid": "virtio8", "promisc": false, "timestamping": ["rx_software", 
"software"], "type": "ether"}, "ansible_fibre_channel_wwn": [], "ansible_fips": false, "ansible_form_factor": "Other", "ansible_fqdn": "kube1", "ansible_hostname": "kube1", "ansible_hostnqn": "", "ansible_interfaces": ["lo", "docker0", "eth0"], "ansible_is_chroot": false, "ansible_iscsi_iqn": "iqn.1994-05.com.redhat:7cbffc69d119", "ansible_kernel": "3.10.0-957.1.3.el7.x86_64", "ansible_lo": {"active": true, "device": "lo", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "on [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "on [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off [fixed]", "tx_scatter_gather": "on [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", 
"tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "on [fixed]"}, "hw_timestamp_filters": [], "ipv4": {"address": "127.0.0.1", "broadcast": "host", "netmask": "255.0.0.0", "network": "127.0.0.0"}, "ipv6": [{"address": "::1", "prefix": "128", "scope": "host"}], "mtu": 65536, "promisc": false, "timestamping": ["rx_software", "software"], "type": "loopback"}, "ansible_local": {}, "ansible_lsb": {}, "ansible_lvm": {"lvs": {"root": {"size_g": "26.77", "vg": "atomicos"}}, "pvs": {"/dev/vda2": {"free_g": "0", "size_g": "9.70", "vg": "atomicos"}, "/dev/vdb": {"free_g": "2.93", "size_g": "20.00", "vg": "atomicos"}}, "vgs": {"atomicos": {"free_g": "2.93", "num_lvs": "1", "num_pvs": "2", "size_g": "29.70"}}}, "ansible_machine": "x86_64", "ansible_machine_id": "2311b13a8e604ab497f5045fa50f0e62", "ansible_memfree_mb": 1477, "ansible_memory_mb": {"nocache": {"free": 1660, "used": 178}, "real": {"free": 1477, "total": 1838, "used": 361}, "swap": {"cached": 0, "free": 0, "total": 0, "used": 0}}, "ansible_memtotal_mb": 1838, "ansible_mounts": [{"block_available": 6616817, "block_size": 4096, "block_total": 7014912, "block_used": 398095, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004082, "inode_total": 14034944, "inode_used": 30862, "mount": "/sysroot", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 27102482432, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6616520, "block_size": 4096, "block_total": 7014912, "block_used": 398392, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004076, "inode_total": 14034944, "inode_used": 30868, "mount": "/var", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27101265920, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, 
{"block_available": 6616520, "block_size": 4096, "block_total": 7014912, "block_used": 398392, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004076, "inode_total": 14034944, "inode_used": 30868, "mount": "/usr", "options": "ro,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27101265920, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 51418, "block_size": 4096, "block_total": 75945, "block_used": 24527, "device": "/dev/vda1", "fstype": "xfs", "inode_available": 153279, "inode_total": 153600, "inode_used": 321, "mount": "/boot", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 210608128, "size_total": 311070720, "uuid": "5804e2ea-a86e-45e1-92c8-f225b7fb60e0"}, {"block_available": 6616520, "block_size": 4096, "block_total": 7014912, "block_used": 398392, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004076, "inode_total": 14034944, "inode_used": 30868, "mount": "/var/lib/docker/containers", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27101265920, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6616817, "block_size": 4096, "block_total": 7014912, "block_used": 398095, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004082, "inode_total": 14034944, "inode_used": 30862, "mount": "/", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27102482432, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6616520, "block_size": 4096, "block_total": 7014912, "block_used": 398392, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004076, "inode_total": 14034944, "inode_used": 30868, "mount": "/var/lib/docker/overlay2", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27101265920, 
"size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}], "ansible_nodename": "kube1", "ansible_os_family": "RedHat", "ansible_pkg_mgr": "yum", "ansible_proc_cmdline": {"BOOT_IMAGE": "/ostree/centos-atomic-host-ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/vmlinuz-3.10.0-957.1.3.el7.x86_64", "console": ["tty1", "ttyS0,115200n8"], "crashkernel": "auto", "no_timer_check": true, "ostree": "/ostree/boot.1/centos-atomic-host/ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/0", "rd.lvm.lv": "atomicos/root", "root": "/dev/mapper/atomicos-root"}, "ansible_processor": ["0", "AuthenticAMD", "AMD Opteron(tm) Processor 4365 EE", "1", "AuthenticAMD", "AMD Opteron(tm) Processor 4365 EE"], "ansible_processor_cores": 1, "ansible_processor_count": 2, "ansible_processor_threads_per_core": 1, "ansible_processor_vcpus": 2, "ansible_product_name": "KVM", "ansible_product_serial": "NA", "ansible_product_uuid": "2311B13A-8E60-4AB4-97F5-045FA50F0E62", "ansible_product_version": "RHEL 7.0.0 PC (i440FX + PIIX, 1996)", "ansible_python": {"executable": "/usr/bin/python", "has_sslcontext": true, "type": "CPython", "version": {"major": 2, "micro": 5, "minor": 7, "releaselevel": "final", "serial": 0}, "version_info": [2, 7, 5, "final", 0]}, "ansible_python_version": "2.7.5", "ansible_real_group_id": 0, "ansible_real_user_id": 0, "ansible_selinux": {"config_mode": "enforcing", "mode": "enforcing", "policyvers": 31, "status": "enabled", "type": "targeted"}, "ansible_selinux_python_present": true, "ansible_service_mgr": "systemd", "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCPLKuR7dDwngf/EEAxmNzKrGdQb3FEcupLKG/qnhtNMulCYoWAxBNF3AyqiY8Uk+wXCKx+bhRvNKeQdpZVm0ys=", "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAIIyxwf+XumGsMPKM/p2NTaOEoY4raBX9y7daAtt97CZW", "ansible_ssh_host_key_rsa_public": 
"AAAAB3NzaC1yc2EAAAADAQABAAABAQCuVNWF19y6tVAFC8prGl2YX58IDZ2oAOwAIACI+2bPtJhA6h7jxUC6XetUUOEnQyhIpw7GIO+2uI0ZnwRg8M9duEv65SUbeMlNA0yRoU3jcebmGKE5Oml+/pZh7p5q8E3SnGyfwT2gbIY9NSqcSaM03P2IudoiPZg2yAlK6+Igg/YKcoXL1xstBrAp4nY0TTok/VNhoRJNVGF7bkEfJbW1uXThhgj09Oq62uGhFKShfnmrQP2TeH+WapN9K2S0SCRWMId0PzoJZdfKU48zVbttg6hEMqo8JsKk12r4OPYZlujCYCraDg4GE8ApUCAymiFxT4P38weN1roj7QomBg+D", "ansible_swapfree_mb": 0, "ansible_swaptotal_mb": 0, "ansible_system": "Linux", "ansible_system_capabilities": ["cap_chown", "cap_dac_override", "cap_dac_read_search", "cap_fowner", "cap_fsetid", "cap_kill", "cap_setgid", "cap_setuid", "cap_setpcap", "cap_linux_immutable", "cap_net_bind_service", "cap_net_broadcast", "cap_net_admin", "cap_net_raw", "cap_ipc_lock", "cap_ipc_owner", "cap_sys_module", "cap_sys_rawio", "cap_sys_chroot", "cap_sys_ptrace", "cap_sys_pacct", "cap_sys_admin", "cap_sys_boot", "cap_sys_nice", "cap_sys_resource", "cap_sys_time", "cap_sys_tty_config", "cap_mknod", "cap_lease", "cap_audit_write", "cap_audit_control", "cap_setfcap", "cap_mac_override", "cap_mac_admin", "cap_syslog", "35", "36+ep"], "ansible_system_capabilities_enforced": "True", "ansible_system_vendor": "Red Hat", "ansible_uptime_seconds": 35, "ansible_user_dir": "/root", "ansible_user_gecos": "root", "ansible_user_gid": 0, "ansible_user_id": "root", "ansible_user_shell": "/bin/bash", "ansible_user_uid": 0, "ansible_userspace_architecture": "x86_64", "ansible_userspace_bits": "64", "ansible_virtualization_role": "guest", "ansible_virtualization_type": "kvm", "discovered_interpreter_python": "/usr/bin/python", "gather_subset": ["all"], "module_setup": true}, "ansible_loop_var": "item", "changed": false, "failed": false, "invocation": {"module_args": {"fact_path": "/etc/ansible/facts.d", "filter": "*", "gather_subset": ["all"], "gather_timeout": 10}}, "item": "kube1"}, {"ansible_loop_var": "item", "item": "kube2", "msg": "Data could not be sent to remote host \"kube2\". 
Make sure this host can be reached over ssh: ssh: Could not resolve hostname kube2: Name or service not known\r\n", "unreachable": true}, {"ansible_facts": {"ansible_all_ipv4_addresses": ["192.168.121.173", "172.17.0.1"], "ansible_all_ipv6_addresses": ["fe80::5054:ff:fe87:7c31"], "ansible_apparmor": {"status": "disabled"}, "ansible_architecture": "x86_64", "ansible_bios_date": "01/01/2011", "ansible_bios_version": "0.5.1", "ansible_cmdline": {"BOOT_IMAGE": "/ostree/centos-atomic-host-ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/vmlinuz-3.10.0-957.1.3.el7.x86_64", "console": "ttyS0,115200n8", "crashkernel": "auto", "no_timer_check": true, "ostree": "/ostree/boot.1/centos-atomic-host/ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/0", "rd.lvm.lv": "atomicos/root", "root": "/dev/mapper/atomicos-root"}, "ansible_date_time": {"date": "2019-06-08", "day": "08", "epoch": "1559954327", "hour": "00", "iso8601": "2019-06-08T00:38:47Z", "iso8601_basic": "20190608T003847669473", "iso8601_basic_short": "20190608T003847", "iso8601_micro": "2019-06-08T00:38:47.669663Z", "minute": "38", "month": "06", "second": "47", "time": "00:38:47", "tz": "UTC", "tz_offset": "+0000", "weekday": "Saturday", "weekday_number": "6", "weeknumber": "22", "year": "2019"}, "ansible_default_ipv4": {"address": "192.168.121.173", "alias": "eth0", "broadcast": "192.168.121.255", "gateway": "192.168.121.1", "interface": "eth0", "macaddress": "52:54:00:87:7c:31", "mtu": 1500, "netmask": "255.255.255.0", "network": "192.168.121.0", "type": "ether"}, "ansible_default_ipv6": {}, "ansible_device_links": {"ids": {"dm-0": ["dm-name-atomicos-root", "dm-uuid-LVM-66eMHV6iDKuNpuUkbfaS59FexWnof79axp8dVuJg8PQpi4u2jEXiS2onl9tiFl4L"], "vda2": ["lvm-pv-uuid-Igwjxr-wF4v-rdeh-DZGI-IDKO-Jk6L-s4N1kl"], "vdb": ["lvm-pv-uuid-YhbqDz-0W2G-eGMk-CIRu-E6EM-77tY-x6ge4x"]}, "labels": {}, "masters": {"vda2": ["dm-0"], "vdb": ["dm-0"]}, "uuids": {"dm-0": ["3aa470ac-9a1b-44c5-871a-d72d513af0e2"], 
"vda1": ["5804e2ea-a86e-45e1-92c8-f225b7fb60e0"]}}, "ansible_devices": {"dm-0": {"holders": [], "host": "", "links": {"ids": ["dm-name-atomicos-root", "dm-uuid-LVM-66eMHV6iDKuNpuUkbfaS59FexWnof79axp8dVuJg8PQpi4u2jEXiS2onl9tiFl4L"], "labels": [], "masters": [], "uuids": ["3aa470ac-9a1b-44c5-871a-d72d513af0e2"]}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "", "sectors": "56139776", "sectorsize": "512", "size": "26.77 GB", "support_discard": "0", "vendor": null, "virtual": 1}, "vda": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {"vda1": {"holders": [], "links": {"ids": [], "labels": [], "masters": [], "uuids": ["5804e2ea-a86e-45e1-92c8-f225b7fb60e0"]}, "sectors": "614400", "sectorsize": 512, "size": "300.00 MB", "start": "2048", "uuid": "5804e2ea-a86e-45e1-92c8-f225b7fb60e0"}, "vda2": {"holders": ["atomicos-root"], "links": {"ids": ["lvm-pv-uuid-Igwjxr-wF4v-rdeh-DZGI-IDKO-Jk6L-s4N1kl"], "labels": [], "masters": ["dm-0"], "uuids": []}, "sectors": "20355072", "sectorsize": 512, "size": "9.71 GB", "start": "616448", "uuid": null}}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "23068672", "sectorsize": "512", "size": "11.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdb": {"holders": ["atomicos-root"], "host": "", "links": {"ids": ["lvm-pv-uuid-YhbqDz-0W2G-eGMk-CIRu-E6EM-77tY-x6ge4x"], "labels": [], "masters": ["dm-0"], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "41943040", "sectorsize": "512", "size": "20.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdc": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, 
"model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdd": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vde": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}}, "ansible_distribution": "CentOS", "ansible_distribution_file_parsed": true, "ansible_distribution_file_path": "/etc/redhat-release", "ansible_distribution_file_variety": "RedHat", "ansible_distribution_major_version": "7", "ansible_distribution_release": "Core", "ansible_distribution_version": "7", "ansible_dns": {"nameservers": ["192.168.121.1"]}, "ansible_docker0": {"active": false, "device": "docker0", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "off [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off 
[fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "on", "tx_gre_csum_segmentation": "on", "tx_gre_segmentation": "on", "tx_gso_partial": "on", "tx_gso_robust": "on", "tx_ipip_segmentation": "on", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "on", "tx_udp_tnl_segmentation": "on", "tx_vlan_offload": "on", "tx_vlan_stag_hw_insert": "on", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]"}, "hw_timestamp_filters": [], "id": "8000.0242f6cca7a2", "interfaces": [], "ipv4": {"address": "172.17.0.1", "broadcast": "global", "netmask": "255.255.0.0", "network": "172.17.0.0"}, "macaddress": "02:42:f6:cc:a7:a2", "mtu": 1500, "promisc": false, "stp": false, "timestamping": ["rx_software", "software"], "type": "bridge"}, "ansible_domain": "", "ansible_effective_group_id": 0, "ansible_effective_user_id": 0, "ansible_env": {"HOME": "/root", "LANG": "en_US.UTF-8", "LOGNAME": "root", "MAIL": "/var/mail/vagrant", "PATH": "/sbin:/bin:/usr/sbin:/usr/bin", "PWD": "/var/home/vagrant", "SHELL": "/bin/bash", "SHLVL": "1", "SUDO_COMMAND": "/bin/sh -c echo BECOME-SUCCESS-qpwtbycuqmfenwdzjvoiofnozppfcchn ; /usr/bin/python", "SUDO_GID": "1000", "SUDO_UID": "1000", "SUDO_USER": "vagrant", "TERM": "unknown", "USER": "root", "USERNAME": "root", "XDG_SESSION_ID": "1", "_": "/usr/bin/python"}, "ansible_eth0": {"active": true, "device": "eth0", 
"features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "off [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "on [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "off [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "off", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]"}, "hw_timestamp_filters": [], "ipv4": {"address": "192.168.121.173", "broadcast": "192.168.121.255", "netmask": "255.255.255.0", "network": "192.168.121.0"}, "ipv6": [{"address": "fe80::5054:ff:fe87:7c31", "prefix": "64", "scope": "link"}], "macaddress": 
"52:54:00:87:7c:31", "module": "virtio_net", "mtu": 1500, "pciid": "virtio8", "promisc": false, "timestamping": ["rx_software", "software"], "type": "ether"}, "ansible_fibre_channel_wwn": [], "ansible_fips": false, "ansible_form_factor": "Other", "ansible_fqdn": "kube3", "ansible_hostname": "kube3", "ansible_hostnqn": "", "ansible_interfaces": ["lo", "docker0", "eth0"], "ansible_is_chroot": false, "ansible_iscsi_iqn": "iqn.1994-05.com.redhat:7cbffc69d119", "ansible_kernel": "3.10.0-957.1.3.el7.x86_64", "ansible_lo": {"active": true, "device": "lo", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "on [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "on [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off [fixed]", "tx_scatter_gather": "on [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", 
"tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "on [fixed]"}, "hw_timestamp_filters": [], "ipv4": {"address": "127.0.0.1", "broadcast": "host", "netmask": "255.0.0.0", "network": "127.0.0.0"}, "ipv6": [{"address": "::1", "prefix": "128", "scope": "host"}], "mtu": 65536, "promisc": false, "timestamping": ["rx_software", "software"], "type": "loopback"}, "ansible_local": {}, "ansible_lsb": {}, "ansible_lvm": {"lvs": {"root": {"size_g": "26.77", "vg": "atomicos"}}, "pvs": {"/dev/vda2": {"free_g": "0", "size_g": "9.70", "vg": "atomicos"}, "/dev/vdb": {"free_g": "2.93", "size_g": "20.00", "vg": "atomicos"}}, "vgs": {"atomicos": {"free_g": "2.93", "num_lvs": "1", "num_pvs": "2", "size_g": "29.70"}}}, "ansible_machine": "x86_64", "ansible_machine_id": "2e752c00b8ab40e8bd5969993469eba7", "ansible_memfree_mb": 1463, "ansible_memory_mb": {"nocache": {"free": 1655, "used": 183}, "real": {"free": 1463, "total": 1838, "used": 375}, "swap": {"cached": 0, "free": 0, "total": 0, "used": 0}}, "ansible_memtotal_mb": 1838, "ansible_mounts": [{"block_available": 6615427, "block_size": 4096, "block_total": 7014912, "block_used": 399485, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004056, "inode_total": 14034944, "inode_used": 30888, "mount": "/sysroot", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 27096788992, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6615427, "block_size": 4096, "block_total": 7014912, "block_used": 399485, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004056, "inode_total": 14034944, "inode_used": 30888, "mount": "/var", "options": 
"rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27096788992, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6615427, "block_size": 4096, "block_total": 7014912, "block_used": 399485, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004056, "inode_total": 14034944, "inode_used": 30888, "mount": "/usr", "options": "ro,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27096788992, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 54635, "block_size": 4096, "block_total": 75945, "block_used": 21310, "device": "/dev/vda1", "fstype": "xfs", "inode_available": 153280, "inode_total": 153600, "inode_used": 320, "mount": "/boot", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 223784960, "size_total": 311070720, "uuid": "5804e2ea-a86e-45e1-92c8-f225b7fb60e0"}, {"block_available": 6615427, "block_size": 4096, "block_total": 7014912, "block_used": 399485, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004056, "inode_total": 14034944, "inode_used": 30888, "mount": "/var/lib/docker/containers", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27096788992, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6615427, "block_size": 4096, "block_total": 7014912, "block_used": 399485, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004056, "inode_total": 14034944, "inode_used": 30888, "mount": "/", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27096788992, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6615427, "block_size": 4096, "block_total": 7014912, "block_used": 399485, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004056, "inode_total": 14034944, 
"inode_used": 30888, "mount": "/var/lib/docker/overlay2", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27096788992, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}], "ansible_nodename": "kube3", "ansible_os_family": "RedHat", "ansible_pkg_mgr": "yum", "ansible_proc_cmdline": {"BOOT_IMAGE": "/ostree/centos-atomic-host-ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/vmlinuz-3.10.0-957.1.3.el7.x86_64", "console": ["tty1", "ttyS0,115200n8"], "crashkernel": "auto", "no_timer_check": true, "ostree": "/ostree/boot.1/centos-atomic-host/ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/0", "rd.lvm.lv": "atomicos/root", "root": "/dev/mapper/atomicos-root"}, "ansible_processor": ["0", "AuthenticAMD", "AMD Opteron(tm) Processor 4365 EE", "1", "AuthenticAMD", "AMD Opteron(tm) Processor 4365 EE"], "ansible_processor_cores": 1, "ansible_processor_count": 2, "ansible_processor_threads_per_core": 1, "ansible_processor_vcpus": 2, "ansible_product_name": "KVM", "ansible_product_serial": "NA", "ansible_product_uuid": "2E752C00-B8AB-40E8-BD59-69993469EBA7", "ansible_product_version": "RHEL 7.0.0 PC (i440FX + PIIX, 1996)", "ansible_python": {"executable": "/usr/bin/python", "has_sslcontext": true, "type": "CPython", "version": {"major": 2, "micro": 5, "minor": 7, "releaselevel": "final", "serial": 0}, "version_info": [2, 7, 5, "final", 0]}, "ansible_python_version": "2.7.5", "ansible_real_group_id": 0, "ansible_real_user_id": 0, "ansible_selinux": {"config_mode": "enforcing", "mode": "enforcing", "policyvers": 31, "status": "enabled", "type": "targeted"}, "ansible_selinux_python_present": true, "ansible_service_mgr": "systemd", "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFTuj9JzeRMbuP1/9LA/e0bm2zHbfl3xO0E5Om9TJKpvVzPFsuAiynh0pWT2z+0ri9TlreMQQk8ScMY+ULesX5Q=", "ansible_ssh_host_key_ed25519_public": 
"AAAAC3NzaC1lZDI1NTE5AAAAIKyu+aDgqoL8NMs6gOYGj5LEXjpOTDN9qCaKD9B4qJva", "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABAQC5OcEwe14YYYaCwQwjCw0e0KvCo+8jULb2XL+oDZYpf0nXEnb1/HZCjlN8ZFhJ6/ACT11KtP/JgLdPRCenhOYGTbkKcm8E6BuGG26F+OMkT/T66qfprzc4lf9WJCr9in8nsqnTTskEcg+KTuHiZ+cBkZY3cORwHRRJQgPnOFAHU9khE/32/6suAVp4N43SYCtH+8sdra16kkQa6m5pppjPrq1LiUGNcJoOZ/J0z+1xyGcF0NeJv5zGmkecL40YdDGbgATu4lkASeFV5te/G7sRDRDmPbkSrKoi+n4raQhNINA1w78KR6NvruoRNoEgH+wRh4FTlIVh/AIAPvtsMOql", "ansible_swapfree_mb": 0, "ansible_swaptotal_mb": 0, "ansible_system": "Linux", "ansible_system_capabilities": ["cap_chown", "cap_dac_override", "cap_dac_read_search", "cap_fowner", "cap_fsetid", "cap_kill", "cap_setgid", "cap_setuid", "cap_setpcap", "cap_linux_immutable", "cap_net_bind_service", "cap_net_broadcast", "cap_net_admin", "cap_net_raw", "cap_ipc_lock", "cap_ipc_owner", "cap_sys_module", "cap_sys_rawio", "cap_sys_chroot", "cap_sys_ptrace", "cap_sys_pacct", "cap_sys_admin", "cap_sys_boot", "cap_sys_nice", "cap_sys_resource", "cap_sys_time", "cap_sys_tty_config", "cap_mknod", "cap_lease", "cap_audit_write", "cap_audit_control", "cap_setfcap", "cap_mac_override", "cap_mac_admin", "cap_syslog", "35", "36+ep"], "ansible_system_capabilities_enforced": "True", "ansible_system_vendor": "Red Hat", "ansible_uptime_seconds": 37, "ansible_user_dir": "/root", "ansible_user_gecos": "root", "ansible_user_gid": 0, "ansible_user_id": "root", "ansible_user_shell": "/bin/bash", "ansible_user_uid": 0, "ansible_userspace_architecture": "x86_64", "ansible_userspace_bits": "64", "ansible_virtualization_role": "guest", "ansible_virtualization_type": "kvm", "gather_subset": ["all"], "module_setup": true}, "ansible_loop_var": "item", "changed": false, "failed": false, "invocation": {"module_args": {"fact_path": "/etc/ansible/facts.d", "filter": "*", "gather_subset": ["all"], "gather_timeout": 10}}, "item": "kube3"}]} ok: [kube1 -> 192.168.121.173] => (item=kube3) fatal: [kube1]: UNREACHABLE! 
=> {"changed": false, "msg": "All items completed", "results": [{"ansible_facts": {"ansible_all_ipv4_addresses": ["192.168.121.206", "172.17.0.1"], "ansible_all_ipv6_addresses": ["fe80::5054:ff:fe00:ec29"], "ansible_apparmor": {"status": "disabled"}, "ansible_architecture": "x86_64", "ansible_bios_date": "01/01/2011", "ansible_bios_version": "0.5.1", "ansible_cmdline": {"BOOT_IMAGE": "/ostree/centos-atomic-host-ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/vmlinuz-3.10.0-957.1.3.el7.x86_64", "console": "ttyS0,115200n8", "crashkernel": "auto", "no_timer_check": true, "ostree": "/ostree/boot.1/centos-atomic-host/ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/0", "rd.lvm.lv": "atomicos/root", "root": "/dev/mapper/atomicos-root"}, "ansible_date_time": {"date": "2019-06-08", "day": "08", "epoch": "1559954321", "hour": "00", "iso8601": "2019-06-08T00:38:41Z", "iso8601_basic": "20190608T003841393229", "iso8601_basic_short": "20190608T003841", "iso8601_micro": "2019-06-08T00:38:41.393502Z", "minute": "38", "month": "06", "second": "41", "time": "00:38:41", "tz": "UTC", "tz_offset": "+0000", "weekday": "Saturday", "weekday_number": "6", "weeknumber": "22", "year": "2019"}, "ansible_default_ipv4": {"address": "192.168.121.206", "alias": "eth0", "broadcast": "192.168.121.255", "gateway": "192.168.121.1", "interface": "eth0", "macaddress": "52:54:00:00:ec:29", "mtu": 1500, "netmask": "255.255.255.0", "network": "192.168.121.0", "type": "ether"}, "ansible_default_ipv6": {}, "ansible_device_links": {"ids": {"dm-0": ["dm-name-atomicos-root", "dm-uuid-LVM-66eMHV6iDKuNpuUkbfaS59FexWnof79axp8dVuJg8PQpi4u2jEXiS2onl9tiFl4L"], "vda2": ["lvm-pv-uuid-Igwjxr-wF4v-rdeh-DZGI-IDKO-Jk6L-s4N1kl"], "vdb": ["lvm-pv-uuid-Wj1hwx-dS1l-tU82-z8e0-xqxV-2Ndh-14zobh"]}, "labels": {}, "masters": {"vda2": ["dm-0"], "vdb": ["dm-0"]}, "uuids": {"dm-0": ["3aa470ac-9a1b-44c5-871a-d72d513af0e2"], "vda1": ["5804e2ea-a86e-45e1-92c8-f225b7fb60e0"]}}, "ansible_devices": 
{"dm-0": {"holders": [], "host": "", "links": {"ids": ["dm-name-atomicos-root", "dm-uuid-LVM-66eMHV6iDKuNpuUkbfaS59FexWnof79axp8dVuJg8PQpi4u2jEXiS2onl9tiFl4L"], "labels": [], "masters": [], "uuids": ["3aa470ac-9a1b-44c5-871a-d72d513af0e2"]}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "", "sectors": "56139776", "sectorsize": "512", "size": "26.77 GB", "support_discard": "0", "vendor": null, "virtual": 1}, "vda": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {"vda1": {"holders": [], "links": {"ids": [], "labels": [], "masters": [], "uuids": ["5804e2ea-a86e-45e1-92c8-f225b7fb60e0"]}, "sectors": "614400", "sectorsize": 512, "size": "300.00 MB", "start": "2048", "uuid": "5804e2ea-a86e-45e1-92c8-f225b7fb60e0"}, "vda2": {"holders": ["atomicos-root"], "links": {"ids": ["lvm-pv-uuid-Igwjxr-wF4v-rdeh-DZGI-IDKO-Jk6L-s4N1kl"], "labels": [], "masters": ["dm-0"], "uuids": []}, "sectors": "20355072", "sectorsize": 512, "size": "9.71 GB", "start": "616448", "uuid": null}}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "23068672", "sectorsize": "512", "size": "11.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdb": {"holders": ["atomicos-root"], "host": "", "links": {"ids": ["lvm-pv-uuid-Wj1hwx-dS1l-tU82-z8e0-xqxV-2Ndh-14zobh"], "labels": [], "masters": ["dm-0"], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "41943040", "sectorsize": "512", "size": "20.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdc": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", 
"sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdd": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vde": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}}, "ansible_distribution": "CentOS", "ansible_distribution_file_parsed": true, "ansible_distribution_file_path": "/etc/redhat-release", "ansible_distribution_file_variety": "RedHat", "ansible_distribution_major_version": "7", "ansible_distribution_release": "Core", "ansible_distribution_version": "7", "ansible_dns": {"nameservers": ["192.168.121.1"]}, "ansible_docker0": {"active": false, "device": "docker0", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "off [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off 
[fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "on", "tx_gre_csum_segmentation": "on", "tx_gre_segmentation": "on", "tx_gso_partial": "on", "tx_gso_robust": "on", "tx_ipip_segmentation": "on", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "on", "tx_udp_tnl_segmentation": "on", "tx_vlan_offload": "on", "tx_vlan_stag_hw_insert": "on", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]"}, "hw_timestamp_filters": [], "id": "8000.0242df3ea2e3", "interfaces": [], "ipv4": {"address": "172.17.0.1", "broadcast": "global", "netmask": "255.255.0.0", "network": "172.17.0.0"}, "macaddress": "02:42:df:3e:a2:e3", "mtu": 1500, "promisc": false, "stp": false, "timestamping": ["rx_software", "software"], "type": "bridge"}, "ansible_domain": "", "ansible_effective_group_id": 0, "ansible_effective_user_id": 0, "ansible_env": {"HOME": "/root", "LANG": "en_US.UTF-8", "LOGNAME": "root", "MAIL": "/var/mail/vagrant", "PATH": "/sbin:/bin:/usr/sbin:/usr/bin", "PWD": "/var/home/vagrant", "SHELL": "/bin/bash", "SHLVL": "1", "SUDO_COMMAND": "/bin/sh -c echo BECOME-SUCCESS-kvmxmtzvinvgjcnhzmhjasvkrvsrambg ; /usr/bin/python", "SUDO_GID": "1000", "SUDO_UID": "1000", "SUDO_USER": "vagrant", "TERM": "unknown", "USER": "root", "USERNAME": "root", "XDG_SESSION_ID": "1", "_": "/usr/bin/python"}, "ansible_eth0": {"active": true, "device": "eth0", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", 
"generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "off [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "on [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "off [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "off", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]"}, "hw_timestamp_filters": [], "ipv4": {"address": "192.168.121.206", "broadcast": "192.168.121.255", "netmask": "255.255.255.0", "network": "192.168.121.0"}, "ipv6": [{"address": "fe80::5054:ff:fe00:ec29", "prefix": "64", "scope": "link"}], "macaddress": "52:54:00:00:ec:29", "module": "virtio_net", "mtu": 1500, "pciid": 
"virtio8", "promisc": false, "timestamping": ["rx_software", "software"], "type": "ether"}, "ansible_fibre_channel_wwn": [], "ansible_fips": false, "ansible_form_factor": "Other", "ansible_fqdn": "kube1", "ansible_hostname": "kube1", "ansible_hostnqn": "", "ansible_interfaces": ["lo", "docker0", "eth0"], "ansible_is_chroot": false, "ansible_iscsi_iqn": "iqn.1994-05.com.redhat:7cbffc69d119", "ansible_kernel": "3.10.0-957.1.3.el7.x86_64", "ansible_lo": {"active": true, "device": "lo", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "on [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "on [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off [fixed]", "tx_scatter_gather": "on [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", 
"tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "on [fixed]"}, "hw_timestamp_filters": [], "ipv4": {"address": "127.0.0.1", "broadcast": "host", "netmask": "255.0.0.0", "network": "127.0.0.0"}, "ipv6": [{"address": "::1", "prefix": "128", "scope": "host"}], "mtu": 65536, "promisc": false, "timestamping": ["rx_software", "software"], "type": "loopback"}, "ansible_local": {}, "ansible_lsb": {}, "ansible_lvm": {"lvs": {"root": {"size_g": "26.77", "vg": "atomicos"}}, "pvs": {"/dev/vda2": {"free_g": "0", "size_g": "9.70", "vg": "atomicos"}, "/dev/vdb": {"free_g": "2.93", "size_g": "20.00", "vg": "atomicos"}}, "vgs": {"atomicos": {"free_g": "2.93", "num_lvs": "1", "num_pvs": "2", "size_g": "29.70"}}}, "ansible_machine": "x86_64", "ansible_machine_id": "2311b13a8e604ab497f5045fa50f0e62", "ansible_memfree_mb": 1469, "ansible_memory_mb": {"nocache": {"free": 1634, "used": 204}, "real": {"free": 1469, "total": 1838, "used": 369}, "swap": {"cached": 0, "free": 0, "total": 0, "used": 0}}, "ansible_memtotal_mb": 1838, "ansible_mounts": [{"block_available": 6620717, "block_size": 4096, "block_total": 7014912, "block_used": 394195, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004279, "inode_total": 14034944, "inode_used": 30665, "mount": "/sysroot", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 27118456832, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6620711, "block_size": 4096, "block_total": 7014912, "block_used": 394201, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004277, "inode_total": 14034944, "inode_used": 30667, "mount": "/var", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27118432256, "size_total": 
28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6620711, "block_size": 4096, "block_total": 7014912, "block_used": 394201, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004278, "inode_total": 14034944, "inode_used": 30666, "mount": "/usr", "options": "ro,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27118432256, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 51418, "block_size": 4096, "block_total": 75945, "block_used": 24527, "device": "/dev/vda1", "fstype": "xfs", "inode_available": 153279, "inode_total": 153600, "inode_used": 321, "mount": "/boot", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 210608128, "size_total": 311070720, "uuid": "5804e2ea-a86e-45e1-92c8-f225b7fb60e0"}, {"block_available": 6620704, "block_size": 4096, "block_total": 7014912, "block_used": 394208, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004276, "inode_total": 14034944, "inode_used": 30668, "mount": "/var/lib/docker/containers", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27118403584, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6620717, "block_size": 4096, "block_total": 7014912, "block_used": 394195, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004278, "inode_total": 14034944, "inode_used": 30666, "mount": "/", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27118456832, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6620697, "block_size": 4096, "block_total": 7014912, "block_used": 394215, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004276, "inode_total": 14034944, "inode_used": 30668, "mount": "/var/lib/docker/overlay2", "options": 
"rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27118374912, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}], "ansible_nodename": "kube1", "ansible_os_family": "RedHat", "ansible_pkg_mgr": "yum", "ansible_proc_cmdline": {"BOOT_IMAGE": "/ostree/centos-atomic-host-ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/vmlinuz-3.10.0-957.1.3.el7.x86_64", "console": ["tty1", "ttyS0,115200n8"], "crashkernel": "auto", "no_timer_check": true, "ostree": "/ostree/boot.1/centos-atomic-host/ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/0", "rd.lvm.lv": "atomicos/root", "root": "/dev/mapper/atomicos-root"}, "ansible_processor": ["0", "AuthenticAMD", "AMD Opteron(tm) Processor 4365 EE", "1", "AuthenticAMD", "AMD Opteron(tm) Processor 4365 EE"], "ansible_processor_cores": 1, "ansible_processor_count": 2, "ansible_processor_threads_per_core": 1, "ansible_processor_vcpus": 2, "ansible_product_name": "KVM", "ansible_product_serial": "NA", "ansible_product_uuid": "2311B13A-8E60-4AB4-97F5-045FA50F0E62", "ansible_product_version": "RHEL 7.0.0 PC (i440FX + PIIX, 1996)", "ansible_python": {"executable": "/usr/bin/python", "has_sslcontext": true, "type": "CPython", "version": {"major": 2, "micro": 5, "minor": 7, "releaselevel": "final", "serial": 0}, "version_info": [2, 7, 5, "final", 0]}, "ansible_python_version": "2.7.5", "ansible_real_group_id": 0, "ansible_real_user_id": 0, "ansible_selinux": {"config_mode": "enforcing", "mode": "enforcing", "policyvers": 31, "status": "enabled", "type": "targeted"}, "ansible_selinux_python_present": true, "ansible_service_mgr": "systemd", "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCPLKuR7dDwngf/EEAxmNzKrGdQb3FEcupLKG/qnhtNMulCYoWAxBNF3AyqiY8Uk+wXCKx+bhRvNKeQdpZVm0ys=", "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAIIyxwf+XumGsMPKM/p2NTaOEoY4raBX9y7daAtt97CZW", "ansible_ssh_host_key_rsa_public": 
"AAAAB3NzaC1yc2EAAAADAQABAAABAQCuVNWF19y6tVAFC8prGl2YX58IDZ2oAOwAIACI+2bPtJhA6h7jxUC6XetUUOEnQyhIpw7GIO+2uI0ZnwRg8M9duEv65SUbeMlNA0yRoU3jcebmGKE5Oml+/pZh7p5q8E3SnGyfwT2gbIY9NSqcSaM03P2IudoiPZg2yAlK6+Igg/YKcoXL1xstBrAp4nY0TTok/VNhoRJNVGF7bkEfJbW1uXThhgj09Oq62uGhFKShfnmrQP2TeH+WapN9K2S0SCRWMId0PzoJZdfKU48zVbttg6hEMqo8JsKk12r4OPYZlujCYCraDg4GE8ApUCAymiFxT4P38weN1roj7QomBg+D", "ansible_swapfree_mb": 0, "ansible_swaptotal_mb": 0, "ansible_system": "Linux", "ansible_system_capabilities": ["cap_chown", "cap_dac_override", "cap_dac_read_search", "cap_fowner", "cap_fsetid", "cap_kill", "cap_setgid", "cap_setuid", "cap_setpcap", "cap_linux_immutable", "cap_net_bind_service", "cap_net_broadcast", "cap_net_admin", "cap_net_raw", "cap_ipc_lock", "cap_ipc_owner", "cap_sys_module", "cap_sys_rawio", "cap_sys_chroot", "cap_sys_ptrace", "cap_sys_pacct", "cap_sys_admin", "cap_sys_boot", "cap_sys_nice", "cap_sys_resource", "cap_sys_time", "cap_sys_tty_config", "cap_mknod", "cap_lease", "cap_audit_write", "cap_audit_control", "cap_setfcap", "cap_mac_override", "cap_mac_admin", "cap_syslog", "35", "36+ep"], "ansible_system_capabilities_enforced": "True", "ansible_system_vendor": "Red Hat", "ansible_uptime_seconds": 30, "ansible_user_dir": "/root", "ansible_user_gecos": "root", "ansible_user_gid": 0, "ansible_user_id": "root", "ansible_user_shell": "/bin/bash", "ansible_user_uid": 0, "ansible_userspace_architecture": "x86_64", "ansible_userspace_bits": "64", "ansible_virtualization_role": "guest", "ansible_virtualization_type": "kvm", "gather_subset": ["all"], "module_setup": true}, "ansible_loop_var": "item", "changed": false, "failed": false, "invocation": {"module_args": {"fact_path": "/etc/ansible/facts.d", "filter": "*", "gather_subset": ["all"], "gather_timeout": 10}}, "item": "kube1"}, {"ansible_loop_var": "item", "item": "kube2", "msg": "Data could not be sent to remote host \"kube2\". 
Make sure this host can be reached over ssh: ssh: Could not resolve hostname kube2: Name or service not known\r\n", "unreachable": true}, {"ansible_facts": {"ansible_all_ipv4_addresses": ["192.168.121.173", "172.17.0.1"], "ansible_all_ipv6_addresses": ["fe80::5054:ff:fe87:7c31"], "ansible_apparmor": {"status": "disabled"}, "ansible_architecture": "x86_64", "ansible_bios_date": "01/01/2011", "ansible_bios_version": "0.5.1", "ansible_cmdline": {"BOOT_IMAGE": "/ostree/centos-atomic-host-ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/vmlinuz-3.10.0-957.1.3.el7.x86_64", "console": "ttyS0,115200n8", "crashkernel": "auto", "no_timer_check": true, "ostree": "/ostree/boot.1/centos-atomic-host/ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/0", "rd.lvm.lv": "atomicos/root", "root": "/dev/mapper/atomicos-root"}, "ansible_date_time": {"date": "2019-06-08", "day": "08", "epoch": "1559954323", "hour": "00", "iso8601": "2019-06-08T00:38:43Z", "iso8601_basic": "20190608T003843443932", "iso8601_basic_short": "20190608T003843", "iso8601_micro": "2019-06-08T00:38:43.444161Z", "minute": "38", "month": "06", "second": "43", "time": "00:38:43", "tz": "UTC", "tz_offset": "+0000", "weekday": "Saturday", "weekday_number": "6", "weeknumber": "22", "year": "2019"}, "ansible_default_ipv4": {"address": "192.168.121.173", "alias": "eth0", "broadcast": "192.168.121.255", "gateway": "192.168.121.1", "interface": "eth0", "macaddress": "52:54:00:87:7c:31", "mtu": 1500, "netmask": "255.255.255.0", "network": "192.168.121.0", "type": "ether"}, "ansible_default_ipv6": {}, "ansible_device_links": {"ids": {"dm-0": ["dm-name-atomicos-root", "dm-uuid-LVM-66eMHV6iDKuNpuUkbfaS59FexWnof79axp8dVuJg8PQpi4u2jEXiS2onl9tiFl4L"], "vda2": ["lvm-pv-uuid-Igwjxr-wF4v-rdeh-DZGI-IDKO-Jk6L-s4N1kl"], "vdb": ["lvm-pv-uuid-YhbqDz-0W2G-eGMk-CIRu-E6EM-77tY-x6ge4x"]}, "labels": {}, "masters": {"vda2": ["dm-0"], "vdb": ["dm-0"]}, "uuids": {"dm-0": ["3aa470ac-9a1b-44c5-871a-d72d513af0e2"], 
"vda1": ["5804e2ea-a86e-45e1-92c8-f225b7fb60e0"]}}, "ansible_devices": {"dm-0": {"holders": [], "host": "", "links": {"ids": ["dm-name-atomicos-root", "dm-uuid-LVM-66eMHV6iDKuNpuUkbfaS59FexWnof79axp8dVuJg8PQpi4u2jEXiS2onl9tiFl4L"], "labels": [], "masters": [], "uuids": ["3aa470ac-9a1b-44c5-871a-d72d513af0e2"]}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "", "sectors": "56139776", "sectorsize": "512", "size": "26.77 GB", "support_discard": "0", "vendor": null, "virtual": 1}, "vda": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {"vda1": {"holders": [], "links": {"ids": [], "labels": [], "masters": [], "uuids": ["5804e2ea-a86e-45e1-92c8-f225b7fb60e0"]}, "sectors": "614400", "sectorsize": 512, "size": "300.00 MB", "start": "2048", "uuid": "5804e2ea-a86e-45e1-92c8-f225b7fb60e0"}, "vda2": {"holders": ["atomicos-root"], "links": {"ids": ["lvm-pv-uuid-Igwjxr-wF4v-rdeh-DZGI-IDKO-Jk6L-s4N1kl"], "labels": [], "masters": ["dm-0"], "uuids": []}, "sectors": "20355072", "sectorsize": 512, "size": "9.71 GB", "start": "616448", "uuid": null}}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "23068672", "sectorsize": "512", "size": "11.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdb": {"holders": ["atomicos-root"], "host": "", "links": {"ids": ["lvm-pv-uuid-YhbqDz-0W2G-eGMk-CIRu-E6EM-77tY-x6ge4x"], "labels": [], "masters": ["dm-0"], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "41943040", "sectorsize": "512", "size": "20.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdc": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, 
"model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdd": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vde": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}}, "ansible_distribution": "CentOS", "ansible_distribution_file_parsed": true, "ansible_distribution_file_path": "/etc/redhat-release", "ansible_distribution_file_variety": "RedHat", "ansible_distribution_major_version": "7", "ansible_distribution_release": "Core", "ansible_distribution_version": "7", "ansible_dns": {"nameservers": ["192.168.121.1"]}, "ansible_docker0": {"active": false, "device": "docker0", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "off [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off 
[fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "on", "tx_gre_csum_segmentation": "on", "tx_gre_segmentation": "on", "tx_gso_partial": "on", "tx_gso_robust": "on", "tx_ipip_segmentation": "on", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "on", "tx_udp_tnl_segmentation": "on", "tx_vlan_offload": "on", "tx_vlan_stag_hw_insert": "on", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]"}, "hw_timestamp_filters": [], "id": "8000.0242f6cca7a2", "interfaces": [], "ipv4": {"address": "172.17.0.1", "broadcast": "global", "netmask": "255.255.0.0", "network": "172.17.0.0"}, "macaddress": "02:42:f6:cc:a7:a2", "mtu": 1500, "promisc": false, "stp": false, "timestamping": ["rx_software", "software"], "type": "bridge"}, "ansible_domain": "", "ansible_effective_group_id": 0, "ansible_effective_user_id": 0, "ansible_env": {"HOME": "/root", "LANG": "en_US.UTF-8", "LOGNAME": "root", "MAIL": "/var/mail/vagrant", "PATH": "/sbin:/bin:/usr/sbin:/usr/bin", "PWD": "/var/home/vagrant", "SHELL": "/bin/bash", "SHLVL": "1", "SUDO_COMMAND": "/bin/sh -c echo BECOME-SUCCESS-xtofnrnkggzvrbqlrcusljpibewlmzuy ; /usr/bin/python", "SUDO_GID": "1000", "SUDO_UID": "1000", "SUDO_USER": "vagrant", "TERM": "unknown", "USER": "root", "USERNAME": "root", "XDG_SESSION_ID": "1", "_": "/usr/bin/python"}, "ansible_eth0": {"active": true, "device": "eth0", 
"features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "off [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "on [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "off [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "off", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]"}, "hw_timestamp_filters": [], "ipv4": {"address": "192.168.121.173", "broadcast": "192.168.121.255", "netmask": "255.255.255.0", "network": "192.168.121.0"}, "ipv6": [{"address": "fe80::5054:ff:fe87:7c31", "prefix": "64", "scope": "link"}], "macaddress": 
"52:54:00:87:7c:31", "module": "virtio_net", "mtu": 1500, "pciid": "virtio8", "promisc": false, "timestamping": ["rx_software", "software"], "type": "ether"}, "ansible_fibre_channel_wwn": [], "ansible_fips": false, "ansible_form_factor": "Other", "ansible_fqdn": "kube3", "ansible_hostname": "kube3", "ansible_hostnqn": "", "ansible_interfaces": ["lo", "docker0", "eth0"], "ansible_is_chroot": false, "ansible_iscsi_iqn": "iqn.1994-05.com.redhat:7cbffc69d119", "ansible_kernel": "3.10.0-957.1.3.el7.x86_64", "ansible_lo": {"active": true, "device": "lo", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "on [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "on [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off [fixed]", "tx_scatter_gather": "on [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", 
"tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "on [fixed]"}, "hw_timestamp_filters": [], "ipv4": {"address": "127.0.0.1", "broadcast": "host", "netmask": "255.0.0.0", "network": "127.0.0.0"}, "ipv6": [{"address": "::1", "prefix": "128", "scope": "host"}], "mtu": 65536, "promisc": false, "timestamping": ["rx_software", "software"], "type": "loopback"}, "ansible_local": {}, "ansible_lsb": {}, "ansible_lvm": {"lvs": {"root": {"size_g": "26.77", "vg": "atomicos"}}, "pvs": {"/dev/vda2": {"free_g": "0", "size_g": "9.70", "vg": "atomicos"}, "/dev/vdb": {"free_g": "2.93", "size_g": "20.00", "vg": "atomicos"}}, "vgs": {"atomicos": {"free_g": "2.93", "num_lvs": "1", "num_pvs": "2", "size_g": "29.70"}}}, "ansible_machine": "x86_64", "ansible_machine_id": "2e752c00b8ab40e8bd5969993469eba7", "ansible_memfree_mb": 1480, "ansible_memory_mb": {"nocache": {"free": 1651, "used": 187}, "real": {"free": 1480, "total": 1838, "used": 358}, "swap": {"cached": 0, "free": 0, "total": 0, "used": 0}}, "ansible_memtotal_mb": 1838, "ansible_mounts": [{"block_available": 6619694, "block_size": 4096, "block_total": 7014912, "block_used": 395218, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004185, "inode_total": 14034944, "inode_used": 30759, "mount": "/sysroot", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 27114266624, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6619674, "block_size": 4096, "block_total": 7014912, "block_used": 395238, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004183, "inode_total": 14034944, "inode_used": 30761, "mount": "/var", "options": 
"rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27114184704, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6619674, "block_size": 4096, "block_total": 7014912, "block_used": 395238, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004183, "inode_total": 14034944, "inode_used": 30761, "mount": "/usr", "options": "ro,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27114184704, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6619694, "block_size": 4096, "block_total": 7014912, "block_used": 395218, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004185, "inode_total": 14034944, "inode_used": 30759, "mount": "/", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27114266624, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 51418, "block_size": 4096, "block_total": 75945, "block_used": 24527, "device": "/dev/vda1", "fstype": "xfs", "inode_available": 153279, "inode_total": 153600, "inode_used": 321, "mount": "/boot", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 210608128, "size_total": 311070720, "uuid": "5804e2ea-a86e-45e1-92c8-f225b7fb60e0"}, {"block_available": 6619649, "block_size": 4096, "block_total": 7014912, "block_used": 395263, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004179, "inode_total": 14034944, "inode_used": 30765, "mount": "/var/lib/docker/containers", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27114082304, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6619551, "block_size": 4096, "block_total": 7014912, "block_used": 395361, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004173, "inode_total": 14034944, 
"inode_used": 30771, "mount": "/var/lib/docker/overlay2", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27113680896, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}], "ansible_nodename": "kube3", "ansible_os_family": "RedHat", "ansible_pkg_mgr": "yum", "ansible_proc_cmdline": {"BOOT_IMAGE": "/ostree/centos-atomic-host-ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/vmlinuz-3.10.0-957.1.3.el7.x86_64", "console": ["tty1", "ttyS0,115200n8"], "crashkernel": "auto", "no_timer_check": true, "ostree": "/ostree/boot.1/centos-atomic-host/ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/0", "rd.lvm.lv": "atomicos/root", "root": "/dev/mapper/atomicos-root"}, "ansible_processor": ["0", "AuthenticAMD", "AMD Opteron(tm) Processor 4365 EE", "1", "AuthenticAMD", "AMD Opteron(tm) Processor 4365 EE"], "ansible_processor_cores": 1, "ansible_processor_count": 2, "ansible_processor_threads_per_core": 1, "ansible_processor_vcpus": 2, "ansible_product_name": "KVM", "ansible_product_serial": "NA", "ansible_product_uuid": "2E752C00-B8AB-40E8-BD59-69993469EBA7", "ansible_product_version": "RHEL 7.0.0 PC (i440FX + PIIX, 1996)", "ansible_python": {"executable": "/usr/bin/python", "has_sslcontext": true, "type": "CPython", "version": {"major": 2, "micro": 5, "minor": 7, "releaselevel": "final", "serial": 0}, "version_info": [2, 7, 5, "final", 0]}, "ansible_python_version": "2.7.5", "ansible_real_group_id": 0, "ansible_real_user_id": 0, "ansible_selinux": {"config_mode": "enforcing", "mode": "enforcing", "policyvers": 31, "status": "enabled", "type": "targeted"}, "ansible_selinux_python_present": true, "ansible_service_mgr": "systemd", "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFTuj9JzeRMbuP1/9LA/e0bm2zHbfl3xO0E5Om9TJKpvVzPFsuAiynh0pWT2z+0ri9TlreMQQk8ScMY+ULesX5Q=", "ansible_ssh_host_key_ed25519_public": 
"AAAAC3NzaC1lZDI1NTE5AAAAIKyu+aDgqoL8NMs6gOYGj5LEXjpOTDN9qCaKD9B4qJva", "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABAQC5OcEwe14YYYaCwQwjCw0e0KvCo+8jULb2XL+oDZYpf0nXEnb1/HZCjlN8ZFhJ6/ACT11KtP/JgLdPRCenhOYGTbkKcm8E6BuGG26F+OMkT/T66qfprzc4lf9WJCr9in8nsqnTTskEcg+KTuHiZ+cBkZY3cORwHRRJQgPnOFAHU9khE/32/6suAVp4N43SYCtH+8sdra16kkQa6m5pppjPrq1LiUGNcJoOZ/J0z+1xyGcF0NeJv5zGmkecL40YdDGbgATu4lkASeFV5te/G7sRDRDmPbkSrKoi+n4raQhNINA1w78KR6NvruoRNoEgH+wRh4FTlIVh/AIAPvtsMOql", "ansible_swapfree_mb": 0, "ansible_swaptotal_mb": 0, "ansible_system": "Linux", "ansible_system_capabilities": ["cap_chown", "cap_dac_override", "cap_dac_read_search", "cap_fowner", "cap_fsetid", "cap_kill", "cap_setgid", "cap_setuid", "cap_setpcap", "cap_linux_immutable", "cap_net_bind_service", "cap_net_broadcast", "cap_net_admin", "cap_net_raw", "cap_ipc_lock", "cap_ipc_owner", "cap_sys_module", "cap_sys_rawio", "cap_sys_chroot", "cap_sys_ptrace", "cap_sys_pacct", "cap_sys_admin", "cap_sys_boot", "cap_sys_nice", "cap_sys_resource", "cap_sys_time", "cap_sys_tty_config", "cap_mknod", "cap_lease", "cap_audit_write", "cap_audit_control", "cap_setfcap", "cap_mac_override", "cap_mac_admin", "cap_syslog", "35", "36+ep"], "ansible_system_capabilities_enforced": "True", "ansible_system_vendor": "Red Hat", "ansible_uptime_seconds": 33, "ansible_user_dir": "/root", "ansible_user_gecos": "root", "ansible_user_gid": 0, "ansible_user_id": "root", "ansible_user_shell": "/bin/bash", "ansible_user_uid": 0, "ansible_userspace_architecture": "x86_64", "ansible_userspace_bits": "64", "ansible_virtualization_role": "guest", "ansible_virtualization_type": "kvm", "gather_subset": ["all"], "module_setup": true}, "ansible_loop_var": "item", "changed": false, "failed": false, "invocation": {"module_args": {"fact_path": "/etc/ansible/facts.d", "filter": "*", "gather_subset": ["all"], "gather_timeout": 10}}, "item": "kube3"}, {"ansible_facts": {"ansible_all_ipv4_addresses": ["192.168.121.206", "172.17.0.1"], 
"ansible_all_ipv6_addresses": ["fe80::5054:ff:fe00:ec29"], "ansible_apparmor": {"status": "disabled"}, "ansible_architecture": "x86_64", "ansible_bios_date": "01/01/2011", "ansible_bios_version": "0.5.1", "ansible_cmdline": {"BOOT_IMAGE": "/ostree/centos-atomic-host-ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/vmlinuz-3.10.0-957.1.3.el7.x86_64", "console": "ttyS0,115200n8", "crashkernel": "auto", "no_timer_check": true, "ostree": "/ostree/boot.1/centos-atomic-host/ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/0", "rd.lvm.lv": "atomicos/root", "root": "/dev/mapper/atomicos-root"}, "ansible_date_time": {"date": "2019-06-08", "day": "08", "epoch": "1559954325", "hour": "00", "iso8601": "2019-06-08T00:38:45Z", "iso8601_basic": "20190608T003845693597", "iso8601_basic_short": "20190608T003845", "iso8601_micro": "2019-06-08T00:38:45.693820Z", "minute": "38", "month": "06", "second": "45", "time": "00:38:45", "tz": "UTC", "tz_offset": "+0000", "weekday": "Saturday", "weekday_number": "6", "weeknumber": "22", "year": "2019"}, "ansible_default_ipv4": {"address": "192.168.121.206", "alias": "eth0", "broadcast": "192.168.121.255", "gateway": "192.168.121.1", "interface": "eth0", "macaddress": "52:54:00:00:ec:29", "mtu": 1500, "netmask": "255.255.255.0", "network": "192.168.121.0", "type": "ether"}, "ansible_default_ipv6": {}, "ansible_device_links": {"ids": {"dm-0": ["dm-name-atomicos-root", "dm-uuid-LVM-66eMHV6iDKuNpuUkbfaS59FexWnof79axp8dVuJg8PQpi4u2jEXiS2onl9tiFl4L"], "vda2": ["lvm-pv-uuid-Igwjxr-wF4v-rdeh-DZGI-IDKO-Jk6L-s4N1kl"], "vdb": ["lvm-pv-uuid-Wj1hwx-dS1l-tU82-z8e0-xqxV-2Ndh-14zobh"]}, "labels": {}, "masters": {"vda2": ["dm-0"], "vdb": ["dm-0"]}, "uuids": {"dm-0": ["3aa470ac-9a1b-44c5-871a-d72d513af0e2"], "vda1": ["5804e2ea-a86e-45e1-92c8-f225b7fb60e0"]}}, "ansible_devices": {"dm-0": {"holders": [], "host": "", "links": {"ids": ["dm-name-atomicos-root", 
"dm-uuid-LVM-66eMHV6iDKuNpuUkbfaS59FexWnof79axp8dVuJg8PQpi4u2jEXiS2onl9tiFl4L"], "labels": [], "masters": [], "uuids": ["3aa470ac-9a1b-44c5-871a-d72d513af0e2"]}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "", "sectors": "56139776", "sectorsize": "512", "size": "26.77 GB", "support_discard": "0", "vendor": null, "virtual": 1}, "vda": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {"vda1": {"holders": [], "links": {"ids": [], "labels": [], "masters": [], "uuids": ["5804e2ea-a86e-45e1-92c8-f225b7fb60e0"]}, "sectors": "614400", "sectorsize": 512, "size": "300.00 MB", "start": "2048", "uuid": "5804e2ea-a86e-45e1-92c8-f225b7fb60e0"}, "vda2": {"holders": ["atomicos-root"], "links": {"ids": ["lvm-pv-uuid-Igwjxr-wF4v-rdeh-DZGI-IDKO-Jk6L-s4N1kl"], "labels": [], "masters": ["dm-0"], "uuids": []}, "sectors": "20355072", "sectorsize": 512, "size": "9.71 GB", "start": "616448", "uuid": null}}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "23068672", "sectorsize": "512", "size": "11.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdb": {"holders": ["atomicos-root"], "host": "", "links": {"ids": ["lvm-pv-uuid-Wj1hwx-dS1l-tU82-z8e0-xqxV-2Ndh-14zobh"], "labels": [], "masters": ["dm-0"], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "41943040", "sectorsize": "512", "size": "20.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdc": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", 
"sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdd": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vde": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}}, "ansible_distribution": "CentOS", "ansible_distribution_file_parsed": true, "ansible_distribution_file_path": "/etc/redhat-release", "ansible_distribution_file_variety": "RedHat", "ansible_distribution_major_version": "7", "ansible_distribution_release": "Core", "ansible_distribution_version": "7", "ansible_dns": {"nameservers": ["192.168.121.1"]}, "ansible_docker0": {"active": false, "device": "docker0", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "off [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", 
"tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "on", "tx_gre_csum_segmentation": "on", "tx_gre_segmentation": "on", "tx_gso_partial": "on", "tx_gso_robust": "on", "tx_ipip_segmentation": "on", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "on", "tx_udp_tnl_segmentation": "on", "tx_vlan_offload": "on", "tx_vlan_stag_hw_insert": "on", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]"}, "hw_timestamp_filters": [], "id": "8000.0242df3ea2e3", "interfaces": [], "ipv4": {"address": "172.17.0.1", "broadcast": "global", "netmask": "255.255.0.0", "network": "172.17.0.0"}, "macaddress": "02:42:df:3e:a2:e3", "mtu": 1500, "promisc": false, "stp": false, "timestamping": ["rx_software", "software"], "type": "bridge"}, "ansible_domain": "", "ansible_effective_group_id": 0, "ansible_effective_user_id": 0, "ansible_env": {"HOME": "/root", "LANG": "C", "LC_ALL": "C", "LC_NUMERIC": "C", "LOGNAME": "root", "MAIL": "/var/mail/vagrant", "PATH": "/sbin:/bin:/usr/sbin:/usr/bin", "PWD": "/var/home/vagrant", "SHELL": "/bin/bash", "SHLVL": "1", "SUDO_COMMAND": "/bin/sh -c echo BECOME-SUCCESS-pspjocqzubesusysmbvjfieryfxyditq ; /usr/bin/python", "SUDO_GID": "1000", "SUDO_UID": "1000", "SUDO_USER": "vagrant", "TERM": "unknown", "USER": "root", "USERNAME": "root", "XDG_SESSION_ID": "1", "_": "/usr/bin/python"}, "ansible_eth0": {"active": true, "device": "eth0", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", 
"generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "off [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "on [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "off [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "off", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]"}, "hw_timestamp_filters": [], "ipv4": {"address": "192.168.121.206", "broadcast": "192.168.121.255", "netmask": "255.255.255.0", "network": "192.168.121.0"}, "ipv6": [{"address": "fe80::5054:ff:fe00:ec29", "prefix": "64", "scope": "link"}], "macaddress": "52:54:00:00:ec:29", "module": "virtio_net", "mtu": 1500, "pciid": "virtio8", "promisc": false, 
"timestamping": ["rx_software", "software"], "type": "ether"}, "ansible_fibre_channel_wwn": [], "ansible_fips": false, "ansible_form_factor": "Other", "ansible_fqdn": "kube1", "ansible_hostname": "kube1", "ansible_hostnqn": "", "ansible_interfaces": ["lo", "docker0", "eth0"], "ansible_is_chroot": false, "ansible_iscsi_iqn": "iqn.1994-05.com.redhat:7cbffc69d119", "ansible_kernel": "3.10.0-957.1.3.el7.x86_64", "ansible_lo": {"active": true, "device": "lo", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "on [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "on [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off [fixed]", "tx_scatter_gather": "on [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", 
"tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "on [fixed]"}, "hw_timestamp_filters": [], "ipv4": {"address": "127.0.0.1", "broadcast": "host", "netmask": "255.0.0.0", "network": "127.0.0.0"}, "ipv6": [{"address": "::1", "prefix": "128", "scope": "host"}], "mtu": 65536, "promisc": false, "timestamping": ["rx_software", "software"], "type": "loopback"}, "ansible_local": {}, "ansible_lsb": {}, "ansible_lvm": {"lvs": {"root": {"size_g": "26.77", "vg": "atomicos"}}, "pvs": {"/dev/vda2": {"free_g": "0", "size_g": "9.70", "vg": "atomicos"}, "/dev/vdb": {"free_g": "2.93", "size_g": "20.00", "vg": "atomicos"}}, "vgs": {"atomicos": {"free_g": "2.93", "num_lvs": "1", "num_pvs": "2", "size_g": "29.70"}}}, "ansible_machine": "x86_64", "ansible_machine_id": "2311b13a8e604ab497f5045fa50f0e62", "ansible_memfree_mb": 1472, "ansible_memory_mb": {"nocache": {"free": 1655, "used": 183}, "real": {"free": 1472, "total": 1838, "used": 366}, "swap": {"cached": 0, "free": 0, "total": 0, "used": 0}}, "ansible_memtotal_mb": 1838, "ansible_mounts": [{"block_available": 6616520, "block_size": 4096, "block_total": 7014912, "block_used": 398392, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004076, "inode_total": 14034944, "inode_used": 30868, "mount": "/sysroot", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 27101265920, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6616520, "block_size": 4096, "block_total": 7014912, "block_used": 398392, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004076, "inode_total": 14034944, "inode_used": 30868, "mount": "/var", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27101265920, "size_total": 28733079552, "uuid": 
"3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 51418, "block_size": 4096, "block_total": 75945, "block_used": 24527, "device": "/dev/vda1", "fstype": "xfs", "inode_available": 153279, "inode_total": 153600, "inode_used": 321, "mount": "/boot", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 210608128, "size_total": 311070720, "uuid": "5804e2ea-a86e-45e1-92c8-f225b7fb60e0"}, {"block_available": 6616520, "block_size": 4096, "block_total": 7014912, "block_used": 398392, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004076, "inode_total": 14034944, "inode_used": 30868, "mount": "/", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27101265920, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6616520, "block_size": 4096, "block_total": 7014912, "block_used": 398392, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004076, "inode_total": 14034944, "inode_used": 30868, "mount": "/usr", "options": "ro,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27101265920, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6616520, "block_size": 4096, "block_total": 7014912, "block_used": 398392, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004076, "inode_total": 14034944, "inode_used": 30868, "mount": "/var/lib/docker/containers", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27101265920, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6616520, "block_size": 4096, "block_total": 7014912, "block_used": 398392, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004076, "inode_total": 14034944, "inode_used": 30868, "mount": "/var/lib/docker/overlay2", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", 
"size_available": 27101265920, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}], "ansible_nodename": "kube1", "ansible_os_family": "RedHat", "ansible_pkg_mgr": "yum", "ansible_proc_cmdline": {"BOOT_IMAGE": "/ostree/centos-atomic-host-ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/vmlinuz-3.10.0-957.1.3.el7.x86_64", "console": ["tty1", "ttyS0,115200n8"], "crashkernel": "auto", "no_timer_check": true, "ostree": "/ostree/boot.1/centos-atomic-host/ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/0", "rd.lvm.lv": "atomicos/root", "root": "/dev/mapper/atomicos-root"}, "ansible_processor": ["0", "AuthenticAMD", "AMD Opteron(tm) Processor 4365 EE", "1", "AuthenticAMD", "AMD Opteron(tm) Processor 4365 EE"], "ansible_processor_cores": 1, "ansible_processor_count": 2, "ansible_processor_threads_per_core": 1, "ansible_processor_vcpus": 2, "ansible_product_name": "KVM", "ansible_product_serial": "NA", "ansible_product_uuid": "2311B13A-8E60-4AB4-97F5-045FA50F0E62", "ansible_product_version": "RHEL 7.0.0 PC (i440FX + PIIX, 1996)", "ansible_python": {"executable": "/usr/bin/python", "has_sslcontext": true, "type": "CPython", "version": {"major": 2, "micro": 5, "minor": 7, "releaselevel": "final", "serial": 0}, "version_info": [2, 7, 5, "final", 0]}, "ansible_python_version": "2.7.5", "ansible_real_group_id": 0, "ansible_real_user_id": 0, "ansible_selinux": {"config_mode": "enforcing", "mode": "enforcing", "policyvers": 31, "status": "enabled", "type": "targeted"}, "ansible_selinux_python_present": true, "ansible_service_mgr": "systemd", "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCPLKuR7dDwngf/EEAxmNzKrGdQb3FEcupLKG/qnhtNMulCYoWAxBNF3AyqiY8Uk+wXCKx+bhRvNKeQdpZVm0ys=", "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAIIyxwf+XumGsMPKM/p2NTaOEoY4raBX9y7daAtt97CZW", "ansible_ssh_host_key_rsa_public": 
"AAAAB3NzaC1yc2EAAAADAQABAAABAQCuVNWF19y6tVAFC8prGl2YX58IDZ2oAOwAIACI+2bPtJhA6h7jxUC6XetUUOEnQyhIpw7GIO+2uI0ZnwRg8M9duEv65SUbeMlNA0yRoU3jcebmGKE5Oml+/pZh7p5q8E3SnGyfwT2gbIY9NSqcSaM03P2IudoiPZg2yAlK6+Igg/YKcoXL1xstBrAp4nY0TTok/VNhoRJNVGF7bkEfJbW1uXThhgj09Oq62uGhFKShfnmrQP2TeH+WapN9K2S0SCRWMId0PzoJZdfKU48zVbttg6hEMqo8JsKk12r4OPYZlujCYCraDg4GE8ApUCAymiFxT4P38weN1roj7QomBg+D", "ansible_swapfree_mb": 0, "ansible_swaptotal_mb": 0, "ansible_system": "Linux", "ansible_system_capabilities": ["cap_chown", "cap_dac_override", "cap_dac_read_search", "cap_fowner", "cap_fsetid", "cap_kill", "cap_setgid", "cap_setuid", "cap_setpcap", "cap_linux_immutable", "cap_net_bind_service", "cap_net_broadcast", "cap_net_admin", "cap_net_raw", "cap_ipc_lock", "cap_ipc_owner", "cap_sys_module", "cap_sys_rawio", "cap_sys_chroot", "cap_sys_ptrace", "cap_sys_pacct", "cap_sys_admin", "cap_sys_boot", "cap_sys_nice", "cap_sys_resource", "cap_sys_time", "cap_sys_tty_config", "cap_mknod", "cap_lease", "cap_audit_write", "cap_audit_control", "cap_setfcap", "cap_mac_override", "cap_mac_admin", "cap_syslog", "35", "36+ep"], "ansible_system_capabilities_enforced": "True", "ansible_system_vendor": "Red Hat", "ansible_uptime_seconds": 35, "ansible_user_dir": "/root", "ansible_user_gecos": "root", "ansible_user_gid": 0, "ansible_user_id": "root", "ansible_user_shell": "/bin/bash", "ansible_user_uid": 0, "ansible_userspace_architecture": "x86_64", "ansible_userspace_bits": "64", "ansible_virtualization_role": "guest", "ansible_virtualization_type": "kvm", "discovered_interpreter_python": "/usr/bin/python", "gather_subset": ["all"], "module_setup": true}, "ansible_loop_var": "item", "changed": false, "failed": false, "invocation": {"module_args": {"fact_path": "/etc/ansible/facts.d", "filter": "*", "gather_subset": ["all"], "gather_timeout": 10}}, "item": "kube1"}, {"ansible_loop_var": "item", "item": "kube2", "msg": "Data could not be sent to remote host \"kube2\". 
Make sure this host can be reached over ssh: ssh: Could not resolve hostname kube2: Name or service not known\r\n", "unreachable": true}, {"ansible_facts": {"ansible_all_ipv4_addresses": ["192.168.121.173", "172.17.0.1"], "ansible_all_ipv6_addresses": ["fe80::5054:ff:fe87:7c31"], "ansible_apparmor": {"status": "disabled"}, "ansible_architecture": "x86_64", "ansible_bios_date": "01/01/2011", "ansible_bios_version": "0.5.1", "ansible_cmdline": {"BOOT_IMAGE": "/ostree/centos-atomic-host-ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/vmlinuz-3.10.0-957.1.3.el7.x86_64", "console": "ttyS0,115200n8", "crashkernel": "auto", "no_timer_check": true, "ostree": "/ostree/boot.1/centos-atomic-host/ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/0", "rd.lvm.lv": "atomicos/root", "root": "/dev/mapper/atomicos-root"}, "ansible_date_time": {"date": "2019-06-08", "day": "08", "epoch": "1559954328", "hour": "00", "iso8601": "2019-06-08T00:38:48Z", "iso8601_basic": "20190608T003848045250", "iso8601_basic_short": "20190608T003848", "iso8601_micro": "2019-06-08T00:38:48.045440Z", "minute": "38", "month": "06", "second": "48", "time": "00:38:48", "tz": "UTC", "tz_offset": "+0000", "weekday": "Saturday", "weekday_number": "6", "weeknumber": "22", "year": "2019"}, "ansible_default_ipv4": {"address": "192.168.121.173", "alias": "eth0", "broadcast": "192.168.121.255", "gateway": "192.168.121.1", "interface": "eth0", "macaddress": "52:54:00:87:7c:31", "mtu": 1500, "netmask": "255.255.255.0", "network": "192.168.121.0", "type": "ether"}, "ansible_default_ipv6": {}, "ansible_device_links": {"ids": {"dm-0": ["dm-name-atomicos-root", "dm-uuid-LVM-66eMHV6iDKuNpuUkbfaS59FexWnof79axp8dVuJg8PQpi4u2jEXiS2onl9tiFl4L"], "vda2": ["lvm-pv-uuid-Igwjxr-wF4v-rdeh-DZGI-IDKO-Jk6L-s4N1kl"], "vdb": ["lvm-pv-uuid-YhbqDz-0W2G-eGMk-CIRu-E6EM-77tY-x6ge4x"]}, "labels": {}, "masters": {"vda2": ["dm-0"], "vdb": ["dm-0"]}, "uuids": {"dm-0": ["3aa470ac-9a1b-44c5-871a-d72d513af0e2"], 
"vda1": ["5804e2ea-a86e-45e1-92c8-f225b7fb60e0"]}}, "ansible_devices": {"dm-0": {"holders": [], "host": "", "links": {"ids": ["dm-name-atomicos-root", "dm-uuid-LVM-66eMHV6iDKuNpuUkbfaS59FexWnof79axp8dVuJg8PQpi4u2jEXiS2onl9tiFl4L"], "labels": [], "masters": [], "uuids": ["3aa470ac-9a1b-44c5-871a-d72d513af0e2"]}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "", "sectors": "56139776", "sectorsize": "512", "size": "26.77 GB", "support_discard": "0", "vendor": null, "virtual": 1}, "vda": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {"vda1": {"holders": [], "links": {"ids": [], "labels": [], "masters": [], "uuids": ["5804e2ea-a86e-45e1-92c8-f225b7fb60e0"]}, "sectors": "614400", "sectorsize": 512, "size": "300.00 MB", "start": "2048", "uuid": "5804e2ea-a86e-45e1-92c8-f225b7fb60e0"}, "vda2": {"holders": ["atomicos-root"], "links": {"ids": ["lvm-pv-uuid-Igwjxr-wF4v-rdeh-DZGI-IDKO-Jk6L-s4N1kl"], "labels": [], "masters": ["dm-0"], "uuids": []}, "sectors": "20355072", "sectorsize": 512, "size": "9.71 GB", "start": "616448", "uuid": null}}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "23068672", "sectorsize": "512", "size": "11.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdb": {"holders": ["atomicos-root"], "host": "", "links": {"ids": ["lvm-pv-uuid-YhbqDz-0W2G-eGMk-CIRu-E6EM-77tY-x6ge4x"], "labels": [], "masters": ["dm-0"], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "41943040", "sectorsize": "512", "size": "20.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdc": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, 
"model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdd": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vde": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}}, "ansible_distribution": "CentOS", "ansible_distribution_file_parsed": true, "ansible_distribution_file_path": "/etc/redhat-release", "ansible_distribution_file_variety": "RedHat", "ansible_distribution_major_version": "7", "ansible_distribution_release": "Core", "ansible_distribution_version": "7", "ansible_dns": {"nameservers": ["192.168.121.1"]}, "ansible_docker0": {"active": false, "device": "docker0", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "off [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off 
[fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "on", "tx_gre_csum_segmentation": "on", "tx_gre_segmentation": "on", "tx_gso_partial": "on", "tx_gso_robust": "on", "tx_ipip_segmentation": "on", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "on", "tx_udp_tnl_segmentation": "on", "tx_vlan_offload": "on", "tx_vlan_stag_hw_insert": "on", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]"}, "hw_timestamp_filters": [], "id": "8000.0242f6cca7a2", "interfaces": [], "ipv4": {"address": "172.17.0.1", "broadcast": "global", "netmask": "255.255.0.0", "network": "172.17.0.0"}, "macaddress": "02:42:f6:cc:a7:a2", "mtu": 1500, "promisc": false, "stp": false, "timestamping": ["rx_software", "software"], "type": "bridge"}, "ansible_domain": "", "ansible_effective_group_id": 0, "ansible_effective_user_id": 0, "ansible_env": {"HOME": "/root", "LANG": "en_US.UTF-8", "LOGNAME": "root", "MAIL": "/var/mail/vagrant", "PATH": "/sbin:/bin:/usr/sbin:/usr/bin", "PWD": "/var/home/vagrant", "SHELL": "/bin/bash", "SHLVL": "1", "SUDO_COMMAND": "/bin/sh -c echo BECOME-SUCCESS-sshsgjtdoafryapiitostbrkvqiaftoq ; /usr/bin/python", "SUDO_GID": "1000", "SUDO_UID": "1000", "SUDO_USER": "vagrant", "TERM": "unknown", "USER": "root", "USERNAME": "root", "XDG_SESSION_ID": "1", "_": "/usr/bin/python"}, "ansible_eth0": {"active": true, "device": "eth0", 
"features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "off [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "on [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "off [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "off", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]"}, "hw_timestamp_filters": [], "ipv4": {"address": "192.168.121.173", "broadcast": "192.168.121.255", "netmask": "255.255.255.0", "network": "192.168.121.0"}, "ipv6": [{"address": "fe80::5054:ff:fe87:7c31", "prefix": "64", "scope": "link"}], "macaddress": 
"52:54:00:87:7c:31", "module": "virtio_net", "mtu": 1500, "pciid": "virtio8", "promisc": false, "timestamping": ["rx_software", "software"], "type": "ether"}, "ansible_fibre_channel_wwn": [], "ansible_fips": false, "ansible_form_factor": "Other", "ansible_fqdn": "kube3", "ansible_hostname": "kube3", "ansible_hostnqn": "", "ansible_interfaces": ["lo", "docker0", "eth0"], "ansible_is_chroot": false, "ansible_iscsi_iqn": "iqn.1994-05.com.redhat:7cbffc69d119", "ansible_kernel": "3.10.0-957.1.3.el7.x86_64", "ansible_lo": {"active": true, "device": "lo", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "on [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "on [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off [fixed]", "tx_scatter_gather": "on [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", 
"tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "on [fixed]"}, "hw_timestamp_filters": [], "ipv4": {"address": "127.0.0.1", "broadcast": "host", "netmask": "255.0.0.0", "network": "127.0.0.0"}, "ipv6": [{"address": "::1", "prefix": "128", "scope": "host"}], "mtu": 65536, "promisc": false, "timestamping": ["rx_software", "software"], "type": "loopback"}, "ansible_local": {}, "ansible_lsb": {}, "ansible_lvm": {"lvs": {"root": {"size_g": "26.77", "vg": "atomicos"}}, "pvs": {"/dev/vda2": {"free_g": "0", "size_g": "9.70", "vg": "atomicos"}, "/dev/vdb": {"free_g": "2.93", "size_g": "20.00", "vg": "atomicos"}}, "vgs": {"atomicos": {"free_g": "2.93", "num_lvs": "1", "num_pvs": "2", "size_g": "29.70"}}}, "ansible_machine": "x86_64", "ansible_machine_id": "2e752c00b8ab40e8bd5969993469eba7", "ansible_memfree_mb": 1455, "ansible_memory_mb": {"nocache": {"free": 1648, "used": 190}, "real": {"free": 1455, "total": 1838, "used": 383}, "swap": {"cached": 0, "free": 0, "total": 0, "used": 0}}, "ansible_memtotal_mb": 1838, "ansible_mounts": [{"block_available": 6614977, "block_size": 4096, "block_total": 7014912, "block_used": 399935, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004060, "inode_total": 14034944, "inode_used": 30884, "mount": "/sysroot", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 27094945792, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6614977, "block_size": 4096, "block_total": 7014912, "block_used": 399935, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004060, "inode_total": 14034944, "inode_used": 30884, "mount": "/var", "options": 
"rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27094945792, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6614977, "block_size": 4096, "block_total": 7014912, "block_used": 399935, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004060, "inode_total": 14034944, "inode_used": 30884, "mount": "/usr", "options": "ro,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27094945792, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 54635, "block_size": 4096, "block_total": 75945, "block_used": 21310, "device": "/dev/vda1", "fstype": "xfs", "inode_available": 153280, "inode_total": 153600, "inode_used": 320, "mount": "/boot", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 223784960, "size_total": 311070720, "uuid": "5804e2ea-a86e-45e1-92c8-f225b7fb60e0"}, {"block_available": 6614977, "block_size": 4096, "block_total": 7014912, "block_used": 399935, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004060, "inode_total": 14034944, "inode_used": 30884, "mount": "/", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27094945792, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6614977, "block_size": 4096, "block_total": 7014912, "block_used": 399935, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004060, "inode_total": 14034944, "inode_used": 30884, "mount": "/var/lib/docker/containers", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27094945792, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6614977, "block_size": 4096, "block_total": 7014912, "block_used": 399935, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004060, "inode_total": 14034944, 
"inode_used": 30884, "mount": "/var/lib/docker/overlay2", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27094945792, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}], "ansible_nodename": "kube3", "ansible_os_family": "RedHat", "ansible_pkg_mgr": "yum", "ansible_proc_cmdline": {"BOOT_IMAGE": "/ostree/centos-atomic-host-ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/vmlinuz-3.10.0-957.1.3.el7.x86_64", "console": ["tty1", "ttyS0,115200n8"], "crashkernel": "auto", "no_timer_check": true, "ostree": "/ostree/boot.1/centos-atomic-host/ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/0", "rd.lvm.lv": "atomicos/root", "root": "/dev/mapper/atomicos-root"}, "ansible_processor": ["0", "AuthenticAMD", "AMD Opteron(tm) Processor 4365 EE", "1", "AuthenticAMD", "AMD Opteron(tm) Processor 4365 EE"], "ansible_processor_cores": 1, "ansible_processor_count": 2, "ansible_processor_threads_per_core": 1, "ansible_processor_vcpus": 2, "ansible_product_name": "KVM", "ansible_product_serial": "NA", "ansible_product_uuid": "2E752C00-B8AB-40E8-BD59-69993469EBA7", "ansible_product_version": "RHEL 7.0.0 PC (i440FX + PIIX, 1996)", "ansible_python": {"executable": "/usr/bin/python", "has_sslcontext": true, "type": "CPython", "version": {"major": 2, "micro": 5, "minor": 7, "releaselevel": "final", "serial": 0}, "version_info": [2, 7, 5, "final", 0]}, "ansible_python_version": "2.7.5", "ansible_real_group_id": 0, "ansible_real_user_id": 0, "ansible_selinux": {"config_mode": "enforcing", "mode": "enforcing", "policyvers": 31, "status": "enabled", "type": "targeted"}, "ansible_selinux_python_present": true, "ansible_service_mgr": "systemd", "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFTuj9JzeRMbuP1/9LA/e0bm2zHbfl3xO0E5Om9TJKpvVzPFsuAiynh0pWT2z+0ri9TlreMQQk8ScMY+ULesX5Q=", "ansible_ssh_host_key_ed25519_public": 
"AAAAC3NzaC1lZDI1NTE5AAAAIKyu+aDgqoL8NMs6gOYGj5LEXjpOTDN9qCaKD9B4qJva", "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABAQC5OcEwe14YYYaCwQwjCw0e0KvCo+8jULb2XL+oDZYpf0nXEnb1/HZCjlN8ZFhJ6/ACT11KtP/JgLdPRCenhOYGTbkKcm8E6BuGG26F+OMkT/T66qfprzc4lf9WJCr9in8nsqnTTskEcg+KTuHiZ+cBkZY3cORwHRRJQgPnOFAHU9khE/32/6suAVp4N43SYCtH+8sdra16kkQa6m5pppjPrq1LiUGNcJoOZ/J0z+1xyGcF0NeJv5zGmkecL40YdDGbgATu4lkASeFV5te/G7sRDRDmPbkSrKoi+n4raQhNINA1w78KR6NvruoRNoEgH+wRh4FTlIVh/AIAPvtsMOql", "ansible_swapfree_mb": 0, "ansible_swaptotal_mb": 0, "ansible_system": "Linux", "ansible_system_capabilities": ["cap_chown", "cap_dac_override", "cap_dac_read_search", "cap_fowner", "cap_fsetid", "cap_kill", "cap_setgid", "cap_setuid", "cap_setpcap", "cap_linux_immutable", "cap_net_bind_service", "cap_net_broadcast", "cap_net_admin", "cap_net_raw", "cap_ipc_lock", "cap_ipc_owner", "cap_sys_module", "cap_sys_rawio", "cap_sys_chroot", "cap_sys_ptrace", "cap_sys_pacct", "cap_sys_admin", "cap_sys_boot", "cap_sys_nice", "cap_sys_resource", "cap_sys_time", "cap_sys_tty_config", "cap_mknod", "cap_lease", "cap_audit_write", "cap_audit_control", "cap_setfcap", "cap_mac_override", "cap_mac_admin", "cap_syslog", "35", "36+ep"], "ansible_system_capabilities_enforced": "True", "ansible_system_vendor": "Red Hat", "ansible_uptime_seconds": 37, "ansible_user_dir": "/root", "ansible_user_gecos": "root", "ansible_user_gid": 0, "ansible_user_id": "root", "ansible_user_shell": "/bin/bash", "ansible_user_uid": 0, "ansible_userspace_architecture": "x86_64", "ansible_userspace_bits": "64", "ansible_virtualization_role": "guest", "ansible_virtualization_type": "kvm", "gather_subset": ["all"], "module_setup": true}, "ansible_loop_var": "item", "changed": false, "failed": false, "invocation": {"module_args": {"fact_path": "/etc/ansible/facts.d", "filter": "*", "gather_subset": ["all"], "gather_timeout": 10}}, "item": "kube3"}]} NO MORE HOSTS LEFT ************************************************************* PLAY 
RECAP *********************************************************************
kube1                      : ok=19   changed=8    unreachable=1    failed=0    skipped=17   rescued=0    ignored=0
kube2                      : ok=0    changed=0    unreachable=1    failed=0    skipped=0    rescued=0    ignored=0
kube3                      : ok=19   changed=8    unreachable=1    failed=0    skipped=17   rescued=0    ignored=0

Saturday 08 June 2019  01:38:48 +0100 (0:00:08.696)       0:01:34.292 *********
===============================================================================
Install packages ------------------------------------------------------- 32.63s
Wait for host to be available ------------------------------------------ 21.28s
gather facts from all instances ----------------------------------------- 8.70s
Persist loaded modules -------------------------------------------------- 5.83s
Load required kernel modules -------------------------------------------- 2.59s
bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.53s
bootstrap-os : Gather nodes hostnames ----------------------------------- 2.50s
Extend root VG ---------------------------------------------------------- 2.37s
Gathering Facts --------------------------------------------------------- 2.05s
bootstrap-os : Create remote_tmp for it is used by another module ------- 1.99s
bootstrap-os : Disable fastestmirror plugin ----------------------------- 1.81s
Extend the root LV and FS to occupy remaining space --------------------- 1.80s
Reboot to make layered packages available ------------------------------- 1.76s
bootstrap-os : Remove require tty --------------------------------------- 1.40s
bootstrap-os : check if atomic host ------------------------------------- 1.16s
bootstrap-os : Check presence of fastestmirror.conf --------------------- 0.99s
download : Sync container ----------------------------------------------- 0.42s
download : Download items ----------------------------------------------- 0.31s
bootstrap-os : Fetch /etc/os-release ------------------------------------ 0.31s
bootstrap-os : include_tasks -------------------------------------------- 0.17s

==> kube3: An error occurred. The error will be shown after all tasks complete.

An error occurred while executing multiple actions in parallel. Any errors
that occurred are shown below.

An error occurred while executing the action on the 'kube3' machine. Please
handle this error then try again:

Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.

Build step 'Execute shell' marked build as failure
Performing Post build task...
Could not match :Build started  : False
Logical operation result is FALSE
Skipping script  :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by

SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

END OF POST BUILD TASK : 0
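For reference, the node-release loop in the skipped post-build script can be exercised locally. The sketch below stubs the `cico` CLI with a shell function (the real `cico node done <ssid>` returns a leased node to the CentOS CI pool, which is not reachable outside the CI network); the stub, the sample SSIDs, and `released.log` are illustrative assumptions, not part of the actual job.

```shell
#!/bin/sh
# Hypothetical stub for the real `cico` client: just record which SSID
# would be released ($4 is the ssid in `cico -q node done <ssid>`).
cico() {
    echo "released: $4" >> released.log
}

# Same pattern as the post-build task: one SSID per line in the file.
SSID_FILE=${SSID_FILE:-./cico-ssid}
printf '%s\n' aaaa-1111 bbbb-2222 > "$SSID_FILE"

: > released.log
for ssid in $(cat "$SSID_FILE")
do
    cico -q node done "$ssid"
done

cat released.log
```

Because `SSID_FILE` uses the `${VAR:-default}` expansion, the same loop works whether or not Jenkins exports a workspace-specific path.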