From ci at centos.org Wed May 1 00:13:44 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 1 May 2019 00:13:44 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #350 In-Reply-To: <1956162420.2823.1556583233011.JavaMail.jenkins@jenkins.ci.centos.org> References: <1956162420.2823.1556583233011.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1723618681.2901.1556669624454.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.79 KB...] Total 92 MB/s | 143 MB 00:01 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7_6.2.x86_64 7/49 Installing : elfutils-0.172-2.el7.x86_64 8/49 Installing : pakchois-0.4-10.el7.x86_64 9/49 Installing : unzip-6.0-19.el7.x86_64 10/49 Installing : dwz-0.11-3.el7.x86_64 11/49 Installing : bzip2-1.0.6-13.el7.x86_64 12/49 Installing : usermode-1.111-5.el7.x86_64 13/49 Installing : python2-distro-1.2.0-1.el7.noarch 14/49 Installing : patch-2.7.1-10.el7_5.x86_64 15/49 Installing : python-backports-1.0-8.el7.x86_64 16/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 17/49 Installing : python-urllib3-1.10.2-5.el7.noarch 18/49 Installing : python-requests-2.6.0-1.el7_1.noarch 19/49 Installing : python-babel-0.9.6-8.el7.noarch 20/49 Installing : distribution-gpg-keys-1.30-1.el7.noarch 21/49 Installing : mock-core-configs-30.2-1.el7.noarch 22/49 Installing : libmodman-2.0.1-8.el7.x86_64 23/49 Installing : libproxy-0.4.11-11.el7.x86_64 24/49 Installing : python-markupsafe-0.11-10.el7.x86_64 25/49 Installing : python-jinja2-2.7.2-2.el7.noarch 26/49 Installing : gdb-7.6.1-114.el7.x86_64 27/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 28/49 Installing : perl-srpm-macros-1-8.el7.noarch 29/49 Installing : pigz-2.3.4-1.el7.x86_64 30/49 Installing : python2-pyroute2-0.4.13-2.el7.noarch 31/49 Installing : golang-src-1.11.5-1.el7.noarch 32/49 Installing : kernel-headers-3.10.0-957.12.1.el7.x86_64 33/49 Installing : glibc-headers-2.17-260.el7_6.4.x86_64 34/49 Installing : glibc-devel-2.17-260.el7_6.4.x86_64 35/49 Installing : gcc-4.8.5-36.el7_6.2.x86_64 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-9.el7_6.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.14-2.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : python-jinja2-2.7.2-2.el7.noarch 2/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 3/49 Verifying : 
python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 4/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 5/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 6/49 Verifying : zip-3.0-11.el7.x86_64 7/49 Verifying : nettle-2.7.1-8.el7.x86_64 8/49 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 9/49 Verifying : kernel-headers-3.10.0-957.12.1.el7.x86_64 10/49 Verifying : golang-src-1.11.5-1.el7.noarch 11/49 Verifying : python2-pyroute2-0.4.13-2.el7.noarch 12/49 Verifying : pigz-2.3.4-1.el7.x86_64 13/49 Verifying : glibc-headers-2.17-260.el7_6.4.x86_64 14/49 Verifying : perl-srpm-macros-1-8.el7.noarch 15/49 Verifying : golang-1.11.5-1.el7.x86_64 16/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 17/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 18/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 19/49 Verifying : gdb-7.6.1-114.el7.x86_64 20/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 22/49 Verifying : gnutls-3.3.29-9.el7_6.x86_64 23/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 24/49 Verifying : libmodman-2.0.1-8.el7.x86_64 25/49 Verifying : mpfr-3.1.1-4.el7.x86_64 26/49 Verifying : distribution-gpg-keys-1.30-1.el7.noarch 27/49 Verifying : python-babel-0.9.6-8.el7.noarch 28/49 Verifying : apr-util-1.5.2-6.el7.x86_64 29/49 Verifying : python-backports-1.0-8.el7.x86_64 30/49 Verifying : patch-2.7.1-10.el7_5.x86_64 31/49 Verifying : libmpc-1.0.1-3.el7.x86_64 32/49 Verifying : python2-distro-1.2.0-1.el7.noarch 33/49 Verifying : usermode-1.111-5.el7.x86_64 34/49 Verifying : python-six-1.9.0-2.el7.noarch 35/49 Verifying : libproxy-0.4.11-11.el7.x86_64 36/49 Verifying : neon-0.30.0-3.el7.x86_64 37/49 Verifying : glibc-devel-2.17-260.el7_6.4.x86_64 38/49 Verifying : mock-core-configs-30.2-1.el7.noarch 39/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 40/49 Verifying : bzip2-1.0.6-13.el7.x86_64 41/49 Verifying : subversion-1.7.14-14.el7.x86_64 42/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 46/49 Verifying : mock-1.4.14-2.el7.noarch 47/49 Verifying : pakchois-0.4-10.el7.x86_64 48/49 Verifying : elfutils-0.172-2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.14-2.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.30-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.4 glibc-headers.x86_64 0:2.17-260.el7_6.4 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.12.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-2.el7 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 
python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-1.el7 python2-pyroute2.noarch 0:0.4.13-2.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 2843 0 --:--:-- --:--:-- --:--:-- 2853 100 8513k 100 8513k 0 0 19.5M 0 --:--:-- --:--:-- --:--:-- 19.5M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 3103 0 --:--:-- --:--:-- --:--:-- 3103 100 38.3M 100 38.3M 0 0 46.8M 0 --:--:-- --:--:-- --:--:-- 46.8M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 999 0 --:--:-- --:--:-- --:--:-- 1006 0 0 0 620 0 0 2449 0 --:--:-- --:--:-- --:--:-- 2449 0 10.7M 0 51774 0 0 126k 0 0:01:26 --:--:-- 0:01:26 126k100 10.7M 100 10.7M 0 0 18.6M 0 --:--:-- --:--:-- --:--:-- 60.3M ~/nightlyrpmrdQxbd/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmrdQxbd/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmrdQxbd/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmrdQxbd ~ INFO: mock.py version 1.4.14 starting (python version = 2.7.5)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmrdQxbd/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.14 INFO: Mock Version: 1.4.14 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmrdQxbd/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 1 minutes 32 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 3326216de87c4982bdae40041f00f6e8 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.jYyk5F:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
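The ERROR lines above are the actual failure: rpmbuild exited non-zero inside mock's epel-7-x86_64 chroot, and this notification never shows the underlying spec/compiler error, which lands in build.log in mock's result directory. A minimal sketch for reproducing the failure outside Jenkins, assuming mock is installed and reusing the SRPM path and config name from the log (this job appears to redirect results to /srv/glusterd2/..., so the default result path below is an assumption):

# Rebuild the same SRPM in the same chroot configuration as the CI job.
mock -r epel-7-x86_64 --rebuild \
    /root/nightlyrpmrdQxbd/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm

# On failure, the rpmbuild/compiler output is in build.log under the
# result directory (mock's default shown here; --resultdir changes it).
less /var/lib/mock/epel-7-x86_64/result/build.log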
Match found for :Building remotely : True
Logical operation result is TRUE
Running script : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by

SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins3989929484680900450.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done c08c7f98
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname  | ip_address   | chassis | used_count | current_state | comment  | distro | rel  | centos_version | architecture | node_pool | console_port | flavor |
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
|   236   | n45.dusty | 172.19.2.109 |  dusty  |    3549    |    Deployed   | c08c7f98 |  None  | None |       7        |    x86_64    |     1     |     2440     |  None  |
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0

From ci at centos.org  Wed May  1 01:05:43 2019
From: ci at centos.org (ci at centos.org)
Date: Wed, 1 May 2019 01:05:43 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #154
Message-ID: <590764804.2902.1556672743672.JavaMail.jenkins@jenkins.ci.centos.org>

See ------------------------------------------
[...truncated 454.98 KB...]
Wednesday 01 May 2019  01:55:40 +0100 (0:00:00.262)       0:17:36.763 *********

TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_region_id] ***
Wednesday 01 May 2019  01:55:41 +0100 (0:00:00.208)       0:17:36.971 *********

TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_tenancy_id] ***
Wednesday 01 May 2019  01:55:41 +0100 (0:00:00.202)       0:17:37.174 *********

TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_user_id] ***
Wednesday 01 May 2019  01:55:41 +0100 (0:00:00.220)       0:17:37.394 *********

TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_user_fingerprint] ***
Wednesday 01 May 2019  01:55:41 +0100 (0:00:00.186)       0:17:37.581 *********

TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_compartment_id] ***
Wednesday 01 May 2019  01:55:41 +0100 (0:00:00.207)       0:17:37.789 *********

TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_vnc_id] ***
Wednesday 01 May 2019  01:55:42 +0100 (0:00:00.219)       0:17:38.009 *********

TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_subnet1_id] ***
Wednesday 01 May 2019  01:55:42 +0100 (0:00:00.210)       0:17:38.219 *********

TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_subnet2_id] ***
Wednesday 01 May 2019  01:55:42 +0100 (0:00:00.199)       0:17:38.418 *********

TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Credentials Check | oci_security_list_management] ***
Wednesday 01 May 2019  01:55:42 +0100 (0:00:00.175)       0:17:38.594 *********

TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Generate Configuration] ***
Wednesday 01 May 2019  01:55:42 +0100 (0:00:00.170)       0:17:38.765
********* TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Encode Configuration] *** Wednesday 01 May 2019 01:55:42 +0100 (0:00:00.181) 0:17:38.947 ********* TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Apply Configuration To Secret] *** Wednesday 01 May 2019 01:55:43 +0100 (0:00:00.187) 0:17:39.134 ********* TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Apply Configuration] *** Wednesday 01 May 2019 01:55:43 +0100 (0:00:00.181) 0:17:39.316 ********* TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Download Controller Manifest] *** Wednesday 01 May 2019 01:55:43 +0100 (0:00:00.223) 0:17:39.539 ********* TASK [kubernetes-apps/cloud_controller/oci : OCI Cloud Controller | Apply Controller Manifest] *** Wednesday 01 May 2019 01:55:43 +0100 (0:00:00.211) 0:17:39.751 ********* PLAY [Fetch config] ************************************************************ TASK [Retrieve kubectl config] ************************************************* Wednesday 01 May 2019 01:55:44 +0100 (0:00:00.248) 0:17:40.000 ********* changed: [kube1] PLAY [Copy kube config for vagrant user] *************************************** TASK [Create a directory] ****************************************************** Wednesday 01 May 2019 01:55:45 +0100 (0:00:00.976) 0:17:40.976 ********* changed: [kube1] changed: [kube2] TASK [Copy kube config for vagrant user] *************************************** Wednesday 01 May 2019 01:55:46 +0100 (0:00:01.669) 0:17:42.646 ********* changed: [kube1] changed: [kube2] PLAY [Deploy GCS] ************************************************************** TASK [GCS Pre | Cluster ID | Generate a UUID] ********************************** Wednesday 01 May 2019 01:55:47 +0100 (0:00:01.099) 0:17:43.746 ********* changed: [kube1] TASK [GCS Pre | Cluster ID | Set gcs_gd2_clusterid fact] *********************** Wednesday 01 May 2019 01:55:48 +0100 (0:00:01.006) 0:17:44.752 ********* ok: [kube1] TASK [GCS Pre | Manifests directory | Create a temporary directory] ************ Wednesday 01 May 2019 01:55:49 +0100 (0:00:00.429) 0:17:45.182 ********* changed: [kube1] TASK [GCS Pre | Manifests directory | Set manifests_dir fact] ****************** Wednesday 01 May 2019 01:55:50 +0100 (0:00:01.227) 0:17:46.409 ********* ok: [kube1] TASK [GCS Pre | Manifests | Sync GCS manifests] ******************************** Wednesday 01 May 2019 01:55:50 +0100 (0:00:00.546) 0:17:46.956 ********* changed: [kube1] => (item=gcs-namespace.yml) changed: [kube1] => (item=gcs-etcd-operator.yml) changed: [kube1] => (item=gcs-etcd-cluster.yml) changed: [kube1] => (item=gcs-gd2-services.yml) changed: [kube1] => (item=gcs-fs-csi.yml) changed: [kube1] => (item=gcs-storage-snapshot.yml) changed: [kube1] => (item=gcs-virtblock-csi.yml) changed: [kube1] => (item=gcs-storage-virtblock.yml) changed: [kube1] => (item=gcs-prometheus-operator.yml) changed: [kube1] => (item=gcs-prometheus-bundle.yml) changed: [kube1] => (item=gcs-prometheus-alertmanager-cluster.yml) changed: [kube1] => (item=gcs-prometheus-operator-metrics.yml) changed: [kube1] => (item=gcs-prometheus-kube-state-metrics.yml) changed: [kube1] => (item=gcs-prometheus-node-exporter.yml) changed: [kube1] => (item=gcs-prometheus-kube-metrics.yml) changed: [kube1] => (item=gcs-prometheus-etcd.yml) changed: [kube1] => (item=gcs-grafana.yml) changed: [kube1] => (item=gcs-operator-crd.yml) changed: [kube1] => (item=gcs-operator.yml) changed: [kube1] => (item=gcs-mixins.yml) TASK [GCS Pre | Manifests | Create 
GD2 manifests] ****************************** Wednesday 01 May 2019 01:56:23 +0100 (0:00:32.128) 0:18:19.085 ********* included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Set fact kube_hostname] *** Wednesday 01 May 2019 01:56:23 +0100 (0:00:00.234) 0:18:19.320 ********* ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Create gcs-gd2-kube1.yml] *** Wednesday 01 May 2019 01:56:23 +0100 (0:00:00.563) 0:18:19.884 ********* changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Set fact kube_hostname] *** Wednesday 01 May 2019 01:56:25 +0100 (0:00:02.056) 0:18:21.941 ********* ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Create gcs-gd2-kube2.yml] *** Wednesday 01 May 2019 01:56:26 +0100 (0:00:00.520) 0:18:22.462 ********* changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Set fact kube_hostname] *** Wednesday 01 May 2019 01:56:28 +0100 (0:00:02.220) 0:18:24.683 ********* ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Create gcs-gd2-kube3.yml] *** Wednesday 01 May 2019 01:56:29 +0100 (0:00:00.480) 0:18:25.163 ********* changed: [kube1] TASK [GCS | Namespace | Create GCS namespace] ********************************** Wednesday 01 May 2019 01:56:31 +0100 (0:00:02.165) 0:18:27.329 ********* ok: [kube1] TASK [GCS | ETCD Operator | Deploy etcd-operator] ****************************** Wednesday 01 May 2019 01:56:32 +0100 (0:00:01.560) 0:18:28.890 ********* ok: [kube1] TASK [GCS | ETCD Operator | Wait for etcd-operator to be available] ************ Wednesday 01 May 2019 01:56:34 +0100 (0:00:01.759) 0:18:30.649 ********* FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (50 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (49 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (48 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (47 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (46 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (45 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (44 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (43 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (42 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (41 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (40 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (39 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (38 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (37 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (36 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (35 retries left). 
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (34 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (33 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (32 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (31 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (30 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (29 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (28 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (27 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (26 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (25 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (24 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (23 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (22 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (21 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (20 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (19 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (18 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (17 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (16 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (15 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (14 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (13 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (12 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (11 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (10 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (9 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (8 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (7 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (6 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (5 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (4 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (3 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (2 retries left). FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (1 retries left). fatal: [kube1]: FAILED! 
=> {"attempts": 50, "changed": true, "cmd": ["/usr/local/bin/kubectl", "-ngcs", "-ojsonpath={.status.availableReplicas}", "get", "deployment", "etcd-operator"], "delta": "0:00:00.275239", "end": "2019-05-01 01:05:43.192652", "rc": 0, "start": "2019-05-01 01:05:42.917413", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry PLAY RECAP ********************************************************************* kube1 : ok=399 changed=116 unreachable=0 failed=1 kube2 : ok=321 changed=93 unreachable=0 failed=0 kube3 : ok=283 changed=78 unreachable=0 failed=0 Wednesday 01 May 2019 02:05:43 +0100 (0:09:08.567) 0:27:39.216 ********* =============================================================================== GCS | ETCD Operator | Wait for etcd-operator to be available ---------- 548.57s download : container_download | download images for kubeadm config images -- 44.72s kubernetes/master : kubeadm | Initialize first master ------------------ 41.40s kubernetes/master : kubeadm | Init other uninitialized masters --------- 38.96s etcd : Gen_certs | Write etcd master certs ----------------------------- 33.61s Install packages ------------------------------------------------------- 33.37s GCS Pre | Manifests | Sync GCS manifests ------------------------------- 32.13s Wait for host to be available ------------------------------------------ 20.85s kubernetes/master : kubeadm | write out kubeadm certs ------------------ 20.70s kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template -- 17.49s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 16.46s gather facts from all instances ---------------------------------------- 13.85s etcd : Gen_certs | Gather etcd master certs ---------------------------- 12.99s kubernetes-apps/ansible : Kubernetes Apps | Start Resources ------------ 12.30s etcd : reload etcd ----------------------------------------------------- 11.99s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Create manifests -- 11.43s container-engine/docker : Docker | pause while Docker restarts --------- 10.38s etcd : wait for etcd up ------------------------------------------------- 9.87s download : file_download | Download item -------------------------------- 9.29s download : container_download | Download containers if pull is required or told to always pull (all nodes) --- 8.77s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
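The failing task polls kubectl for the deployment's availableReplicas field, and after 50 retries (548.57s per the timing summary above) the field is still empty: the etcd-operator pod never became ready. A sketch of follow-up commands one might run on kube1 to find out why, using only the namespace (gcs) and deployment name taken from the failed command; the events and pod names kubectl prints would guide the next step:

# Deployment status and its recorded rollout events.
kubectl -n gcs get deployment etcd-operator -o wide
kubectl -n gcs describe deployment etcd-operator

# Pod state in the namespace, then logs from the operator itself.
kubectl -n gcs get pods
kubectl -n gcs logs deployment/etcd-operator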
Could not match :Build started : False
Logical operation result is FALSE
Skipping script : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by

SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

END OF POST BUILD TASK : 0

From ci at centos.org  Wed May  1 01:22:49 2019
From: ci at centos.org (ci at centos.org)
Date: Wed, 1 May 2019 01:22:49 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #179
In-Reply-To: <1857405723.2832.1556587387674.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1857405723.2832.1556587387674.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1812371704.2905.1556673769919.JavaMail.jenkins@jenkins.ci.centos.org>

See ------------------------------------------
[...truncated 55.58 KB...]
changed: [localhost]

TASK [Create docker network(s)] ************************************************

TASK [Determine the CMD directives] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Create molecule instance(s)] *********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) creation to complete] *******************************
changed: [localhost] => (item=None)
changed: [localhost]

PLAY RECAP *********************************************************************
localhost : ok=6 changed=4 unreachable=0 failed=0

--> Scenario: 'default'
--> Action: 'prepare'

PLAY [Prepare] *****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

PLAY RECAP *********************************************************************
instance : ok=3 changed=2 unreachable=0 failed=0

--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. 
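All five ansible-lint hits point at the same file: roles/firewall_config/meta/main.yml still carries the boilerplate that ansible-galaxy init generates ('your description', 'your name', and no platforms list), which trips rules 701 and 703 and aborts the test sequence. A sketch of a fix, written as a shell heredoc for concreteness; every value below is a placeholder to replace with the project's real metadata, and the path assumes the repository checkout root:

# Hypothetical replacement for the boilerplate galaxy metadata flagged above.
cat > gluster-ansible-infra/roles/firewall_config/meta/main.yml <<'EOF'
galaxy_info:
  author: Gluster maintainers          # placeholder
  description: Firewall configuration for GlusterFS deployments  # placeholder
  company: placeholder
  license: GPLv3                       # use the project's actual license
  min_ansible_version: 2.5
  platforms:                           # rule 701 wants this filled in
    - name: EL
      versions:
        - 7
  galaxy_tags: []
dependencies: []
EOF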
--> Scenario: 'default'
--> Action: 'destroy'

PLAY [Destroy] *****************************************************************

TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) deletion to complete] *******************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Delete docker network(s)] ************************************************

PLAY RECAP *********************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=0

./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── create
    └── prepare

--> Scenario: 'default'
--> Action: 'create'

PLAY [Create] ******************************************************************

TASK [Log into a Docker registry] **********************************************
skipping: [localhost] => (item=None)

TASK [Create Dockerfiles from image names] *************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Discover local Docker images] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Build an Ansible compatible image] ***************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Create docker network(s)] ************************************************

TASK [Determine the CMD directives] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Create molecule instance(s)] *********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) creation to complete] *******************************
changed: [localhost] => (item=None)
changed: [localhost]

PLAY RECAP *********************************************************************
localhost : ok=6 changed=4 unreachable=0 failed=0

--> Scenario: 'default'
--> Action: 'prepare'

PLAY [Prepare] *****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

PLAY RECAP *********************************************************************
instance : ok=3 changed=2 unreachable=0 failed=0

--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
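Besides the lint failure, the log above also shows ./gluster-ansible-infra/tests/run-centos-ci.sh failing at line 29 to cd into gluster-ansible-infra/roles/backend_setup/, after which the run simply exercises the firewall_config scenario a second time. A sketch of a guard for that step; the variable name and exit policy are illustrative, not the script's actual code:

# Hypothetical guard around the cd at run-centos-ci.sh line 29, so a
# missing role directory fails loudly instead of re-testing the previous role.
role_dir=gluster-ansible-infra/roles/backend_setup/
if ! cd "$role_dir"; then
    echo "ERROR: $role_dir not found (cwd: $PWD)" >&2
    exit 1
fi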
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Thu May 2 00:13:59 2019 From: ci at centos.org (ci at centos.org) Date: Thu, 2 May 2019 00:13:59 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #351 In-Reply-To: <1723618681.2901.1556669624454.JavaMail.jenkins@jenkins.ci.centos.org> References: <1723618681.2901.1556669624454.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1198044229.2955.1556756039937.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.37 KB...] Total 94 MB/s | 143 MB 00:01 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7_6.2.x86_64 7/49 Installing : elfutils-0.172-2.el7.x86_64 8/49 Installing : pakchois-0.4-10.el7.x86_64 9/49 Installing : unzip-6.0-19.el7.x86_64 10/49 Installing : dwz-0.11-3.el7.x86_64 11/49 Installing : bzip2-1.0.6-13.el7.x86_64 12/49 Installing : usermode-1.111-5.el7.x86_64 13/49 Installing : python2-distro-1.2.0-1.el7.noarch 14/49 Installing : patch-2.7.1-10.el7_5.x86_64 15/49 Installing : python-backports-1.0-8.el7.x86_64 16/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 17/49 Installing : python-urllib3-1.10.2-5.el7.noarch 18/49 Installing : python-requests-2.6.0-1.el7_1.noarch 19/49 Installing : python-babel-0.9.6-8.el7.noarch 20/49 Installing : distribution-gpg-keys-1.30-1.el7.noarch 21/49 Installing : mock-core-configs-30.2-1.el7.noarch 22/49 Installing : libmodman-2.0.1-8.el7.x86_64 23/49 Installing : libproxy-0.4.11-11.el7.x86_64 24/49 Installing : python-markupsafe-0.11-10.el7.x86_64 25/49 Installing : python-jinja2-2.7.2-2.el7.noarch 26/49 Installing : gdb-7.6.1-114.el7.x86_64 27/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 28/49 Installing : perl-srpm-macros-1-8.el7.noarch 29/49 Installing : pigz-2.3.4-1.el7.x86_64 30/49 Installing : python2-pyroute2-0.4.13-2.el7.noarch 31/49 Installing : golang-src-1.11.5-1.el7.noarch 32/49 Installing : kernel-headers-3.10.0-957.12.1.el7.x86_64 33/49 Installing : glibc-headers-2.17-260.el7_6.4.x86_64 34/49 Installing : glibc-devel-2.17-260.el7_6.4.x86_64 35/49 Installing : gcc-4.8.5-36.el7_6.2.x86_64 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-9.el7_6.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : 
golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.14-2.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : python-jinja2-2.7.2-2.el7.noarch 2/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 3/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 4/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 5/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 6/49 Verifying : zip-3.0-11.el7.x86_64 7/49 Verifying : nettle-2.7.1-8.el7.x86_64 8/49 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 9/49 Verifying : kernel-headers-3.10.0-957.12.1.el7.x86_64 10/49 Verifying : golang-src-1.11.5-1.el7.noarch 11/49 Verifying : python2-pyroute2-0.4.13-2.el7.noarch 12/49 Verifying : pigz-2.3.4-1.el7.x86_64 13/49 Verifying : glibc-headers-2.17-260.el7_6.4.x86_64 14/49 Verifying : perl-srpm-macros-1-8.el7.noarch 15/49 Verifying : golang-1.11.5-1.el7.x86_64 16/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 17/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 18/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 19/49 Verifying : gdb-7.6.1-114.el7.x86_64 20/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 22/49 Verifying : gnutls-3.3.29-9.el7_6.x86_64 23/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 24/49 Verifying : libmodman-2.0.1-8.el7.x86_64 25/49 Verifying : mpfr-3.1.1-4.el7.x86_64 26/49 Verifying : distribution-gpg-keys-1.30-1.el7.noarch 27/49 Verifying : python-babel-0.9.6-8.el7.noarch 28/49 Verifying : apr-util-1.5.2-6.el7.x86_64 29/49 Verifying : python-backports-1.0-8.el7.x86_64 30/49 Verifying : patch-2.7.1-10.el7_5.x86_64 31/49 Verifying : libmpc-1.0.1-3.el7.x86_64 32/49 Verifying : python2-distro-1.2.0-1.el7.noarch 33/49 Verifying : usermode-1.111-5.el7.x86_64 34/49 Verifying : python-six-1.9.0-2.el7.noarch 35/49 Verifying : libproxy-0.4.11-11.el7.x86_64 36/49 Verifying : neon-0.30.0-3.el7.x86_64 37/49 Verifying : glibc-devel-2.17-260.el7_6.4.x86_64 38/49 Verifying : mock-core-configs-30.2-1.el7.noarch 39/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 40/49 Verifying : bzip2-1.0.6-13.el7.x86_64 41/49 Verifying : subversion-1.7.14-14.el7.x86_64 42/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 46/49 Verifying : mock-1.4.14-2.el7.noarch 47/49 Verifying : pakchois-0.4-10.el7.x86_64 48/49 Verifying : elfutils-0.172-2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.14-2.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.30-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.4 glibc-headers.x86_64 0:2.17-260.el7_6.4 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.12.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 
perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-2.el7 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-1.el7 python2-pyroute2.noarch 0:0.4.13-2.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 2486 0 --:--:-- --:--:-- --:--:-- 2489 16 8513k 16 1444k 0 0 2276k 0 0:00:03 --:--:-- 0:00:03 2276k100 8513k 100 8513k 0 0 7527k 0 0:00:01 0:00:01 --:--:-- 13.9M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 3190 0 --:--:-- --:--:-- --:--:-- 3198 42 38.3M 42 16.4M 0 0 33.1M 0 0:00:01 --:--:-- 0:00:01 33.1M100 38.3M 100 38.3M 0 0 50.8M 0 --:--:-- --:--:-- --:--:-- 84.2M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 1027 0 --:--:-- --:--:-- --:--:-- 1033 0 0 0 620 0 0 2543 0 --:--:-- --:--:-- --:--:-- 2543 100 10.7M 100 10.7M 0 0 16.3M 0 --:--:-- --:--:-- --:--:-- 16.3M ~/nightlyrpmUs2CbV/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmUs2CbV/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmUs2CbV/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmUs2CbV ~ INFO: mock.py version 1.4.14 starting (python version = 2.7.5)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmUs2CbV/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.14 INFO: Mock Version: 1.4.14 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmUs2CbV/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 1 minutes 31 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M aa6d95c8f69b4a308df285a8418f9a77 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.zOobqu:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True
Logical operation result is TRUE
Running script : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by

SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins4019742544014092351.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done 9aa86804
+---------+----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname | ip_address  | chassis | used_count | current_state | comment  | distro | rel  | centos_version | architecture | node_pool | console_port | flavor |
+---------+----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
|   193   | n2.dusty | 172.19.2.66 |  dusty  |    3554    |    Deployed   | 9aa86804 |  None  | None |       7        |    x86_64    |     1     |     2010     |  None  |
+---------+----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0

From ci at centos.org  Thu May  2 01:22:31 2019
From: ci at centos.org (ci at centos.org)
Date: Thu, 2 May 2019 01:22:31 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #180
In-Reply-To: <1812371704.2905.1556673769919.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1812371704.2905.1556673769919.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1611545614.2958.1556760151053.JavaMail.jenkins@jenkins.ci.centos.org>

See ------------------------------------------
[...truncated 55.22 KB...]
changed: [localhost]

TASK [Create docker network(s)] ************************************************

TASK [Determine the CMD directives] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Create molecule instance(s)] *********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) creation to complete] *******************************
changed: [localhost] => (item=None)
changed: [localhost]

PLAY RECAP *********************************************************************
localhost : ok=6 changed=4 unreachable=0 failed=0

--> Scenario: 'default'
--> Action: 'prepare'

PLAY [Prepare] *****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

PLAY RECAP *********************************************************************
instance : ok=3 changed=2 unreachable=0 failed=0

--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... [701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. 
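Builds #179 and #180 abort at the same point: molecule's default sequence runs lint first, so converge, idempotence and verify are never reached until the role metadata is fixed. A sketch for reproducing just this step locally, assuming molecule 2.x with the Docker driver as these logs show, run from a checkout of gluster-ansible-infra:

cd gluster-ansible-infra/roles/firewall_config

# Run only the failing action...
molecule lint

# ...or the whole default sequence from the test matrix above.
molecule test --scenario-name default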
--> Scenario: 'default'
--> Action: 'destroy'

PLAY [Destroy] *****************************************************************

TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) deletion to complete] *******************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Delete docker network(s)] ************************************************

PLAY RECAP *********************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=0

./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── create
    └── prepare

--> Scenario: 'default'
--> Action: 'create'

PLAY [Create] ******************************************************************

TASK [Log into a Docker registry] **********************************************
skipping: [localhost] => (item=None)

TASK [Create Dockerfiles from image names] *************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Discover local Docker images] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Build an Ansible compatible image] ***************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Create docker network(s)] ************************************************

TASK [Determine the CMD directives] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Create molecule instance(s)] *********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) creation to complete] *******************************
changed: [localhost] => (item=None)
changed: [localhost]

PLAY RECAP *********************************************************************
localhost : ok=6 changed=4 unreachable=0 failed=0

--> Scenario: 'default'
--> Action: 'prepare'

PLAY [Prepare] *****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

PLAY RECAP *********************************************************************
instance : ok=3 changed=2 unreachable=0 failed=0

--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1

[703] Should change default metadata: author
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1

[703] Should change default metadata: description
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1

[703] Should change default metadata: company
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1

[703] Should change default metadata: license
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1

An error occurred during the test sequence action: 'lint'. Cleaning up.
--> Scenario: 'default'
--> Action: 'destroy'

PLAY [Destroy] *****************************************************************

TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) deletion to complete] *******************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Delete docker network(s)] ************************************************

PLAY RECAP *********************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=0

Build step 'Execute shell' marked build as failure
Performing Post build task...
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Fri May 3 00:15:56 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 3 May 2019 00:15:56 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #352 In-Reply-To: <1198044229.2955.1556756039937.JavaMail.jenkins@jenkins.ci.centos.org> References: <1198044229.2955.1556756039937.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1801803472.3020.1556842556208.JavaMail.jenkins@jenkins.ci.centos.org> See Changes: [github] git clone with just depth 1 for glusterfs (#57) ------------------------------------------ [...truncated 37.38 KB...] Total 60 MB/s | 143 MB 00:02 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7_6.2.x86_64 7/49 Installing : elfutils-0.172-2.el7.x86_64 8/49 Installing : pakchois-0.4-10.el7.x86_64 9/49 Installing : unzip-6.0-19.el7.x86_64 10/49 Installing : dwz-0.11-3.el7.x86_64 11/49 Installing : bzip2-1.0.6-13.el7.x86_64 12/49 Installing : usermode-1.111-5.el7.x86_64 13/49 Installing : python2-distro-1.2.0-1.el7.noarch 14/49 Installing : patch-2.7.1-10.el7_5.x86_64 15/49 Installing : python-backports-1.0-8.el7.x86_64 16/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 17/49 Installing : python-urllib3-1.10.2-5.el7.noarch 18/49 Installing : python-requests-2.6.0-1.el7_1.noarch 19/49 Installing : python-babel-0.9.6-8.el7.noarch 20/49 Installing : distribution-gpg-keys-1.30-1.el7.noarch 21/49 Installing : mock-core-configs-30.2-1.el7.noarch 22/49 Installing : libmodman-2.0.1-8.el7.x86_64 23/49 Installing : libproxy-0.4.11-11.el7.x86_64 24/49 Installing : python-markupsafe-0.11-10.el7.x86_64 25/49 Installing : python-jinja2-2.7.2-2.el7.noarch 26/49 Installing : gdb-7.6.1-114.el7.x86_64 27/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 28/49 Installing : perl-srpm-macros-1-8.el7.noarch 29/49 Installing : pigz-2.3.4-1.el7.x86_64 30/49 Installing : python2-pyroute2-0.4.13-2.el7.noarch 31/49 Installing : golang-src-1.11.5-1.el7.noarch 32/49 Installing : kernel-headers-3.10.0-957.12.1.el7.x86_64 33/49 Installing : glibc-headers-2.17-260.el7_6.4.x86_64 34/49 Installing : glibc-devel-2.17-260.el7_6.4.x86_64 35/49 Installing : gcc-4.8.5-36.el7_6.2.x86_64 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-9.el7_6.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : 
subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.14-2.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : python-jinja2-2.7.2-2.el7.noarch 2/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 3/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 4/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 5/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 6/49 Verifying : zip-3.0-11.el7.x86_64 7/49 Verifying : nettle-2.7.1-8.el7.x86_64 8/49 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 9/49 Verifying : kernel-headers-3.10.0-957.12.1.el7.x86_64 10/49 Verifying : golang-src-1.11.5-1.el7.noarch 11/49 Verifying : python2-pyroute2-0.4.13-2.el7.noarch 12/49 Verifying : pigz-2.3.4-1.el7.x86_64 13/49 Verifying : glibc-headers-2.17-260.el7_6.4.x86_64 14/49 Verifying : perl-srpm-macros-1-8.el7.noarch 15/49 Verifying : golang-1.11.5-1.el7.x86_64 16/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 17/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 18/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 19/49 Verifying : gdb-7.6.1-114.el7.x86_64 20/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 22/49 Verifying : gnutls-3.3.29-9.el7_6.x86_64 23/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 24/49 Verifying : libmodman-2.0.1-8.el7.x86_64 25/49 Verifying : mpfr-3.1.1-4.el7.x86_64 26/49 Verifying : distribution-gpg-keys-1.30-1.el7.noarch 27/49 Verifying : python-babel-0.9.6-8.el7.noarch 28/49 Verifying : apr-util-1.5.2-6.el7.x86_64 29/49 Verifying : python-backports-1.0-8.el7.x86_64 30/49 Verifying : patch-2.7.1-10.el7_5.x86_64 31/49 Verifying : libmpc-1.0.1-3.el7.x86_64 32/49 Verifying : python2-distro-1.2.0-1.el7.noarch 33/49 Verifying : usermode-1.111-5.el7.x86_64 34/49 Verifying : python-six-1.9.0-2.el7.noarch 35/49 Verifying : libproxy-0.4.11-11.el7.x86_64 36/49 Verifying : neon-0.30.0-3.el7.x86_64 37/49 Verifying : glibc-devel-2.17-260.el7_6.4.x86_64 38/49 Verifying : mock-core-configs-30.2-1.el7.noarch 39/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 40/49 Verifying : bzip2-1.0.6-13.el7.x86_64 41/49 Verifying : subversion-1.7.14-14.el7.x86_64 42/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 46/49 Verifying : mock-1.4.14-2.el7.noarch 47/49 Verifying : pakchois-0.4-10.el7.x86_64 48/49 Verifying : elfutils-0.172-2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.14-2.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.30-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.4 glibc-headers.x86_64 0:2.17-260.el7_6.4 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.12.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 
perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-2.el7 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-1.el7 python2-pyroute2.noarch 0:0.4.13-2.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1786 0 --:--:-- --:--:-- --:--:-- 1795 100 8513k 100 8513k 0 0 10.8M 0 --:--:-- --:--:-- --:--:-- 10.8M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2160 0 --:--:-- --:--:-- --:--:-- 2162 96 38.3M 96 36.8M 0 0 34.2M 0 0:00:01 0:00:01 --:--:-- 34.2M100 38.3M 100 38.3M 0 0 35.3M 0 0:00:01 0:00:01 --:--:-- 168M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 568 0 --:--:-- --:--:-- --:--:-- 570 0 0 0 620 0 0 1691 0 --:--:-- --:--:-- --:--:-- 1691 4 10.7M 4 492k 0 0 900k 0 0:00:12 --:--:-- 0:00:12 900k100 10.7M 100 10.7M 0 0 15.9M 0 --:--:-- --:--:-- --:--:-- 81.3M ~/nightlyrpmjdkYs1/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmjdkYs1/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmjdkYs1/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmjdkYs1 ~ INFO: mock.py version 1.4.14 starting (python version = 2.7.5)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmjdkYs1/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.14 INFO: Mock Version: 1.4.14 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmjdkYs1/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 23 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M b8350074ea8646eabaeb886aa8aab472 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.GnUs3k:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins4635934902634712262.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done 0c0efaad +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 98 | n34.pufty | 172.19.3.98 | pufty | 3563 | Deployed | 0c0efaad | None | None | 7 | x86_64 | 1 | 2330 | None | +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Fri May 3 00:51:01 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 3 May 2019 00:51:01 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 9988 - Failure! 
	(release-4.1 on CentOS-6/x86_64)
Message-ID: <1361112677.3023.1556844662531.JavaMail.jenkins@jenkins.ci.centos.org>

gluster_build-rpms - Build # 9988 - Failure:

Check console output at https://ci.centos.org/job/gluster_build-rpms/9988/ to view the results.

From ci at centos.org Fri May 3 00:51:54 2019
From: ci at centos.org (ci at centos.org)
Date: Fri, 3 May 2019 00:51:54 +0000 (UTC)
Subject: [CI-results] gluster_build-rpms - Build # 9989 - Still Failing! (release-4.1 on CentOS-7/x86_64)
In-Reply-To: <1361112677.3023.1556844662531.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1361112677.3023.1556844662531.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <2040109967.3025.1556844714513.JavaMail.jenkins@jenkins.ci.centos.org>

gluster_build-rpms - Build # 9989 - Still Failing:

Check console output at https://ci.centos.org/job/gluster_build-rpms/9989/ to view the results.

From ci at centos.org Fri May 3 00:52:40 2019
From: ci at centos.org (ci at centos.org)
Date: Fri, 3 May 2019 00:52:40 +0000 (UTC)
Subject: [CI-results] gluster_build-rpms - Build # 9990 - Still Failing! (release-5 on CentOS-6/x86_64)
In-Reply-To: <2040109967.3025.1556844714513.JavaMail.jenkins@jenkins.ci.centos.org>
References: <2040109967.3025.1556844714513.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1505652418.3027.1556844760913.JavaMail.jenkins@jenkins.ci.centos.org>

gluster_build-rpms - Build # 9990 - Still Failing:

Check console output at https://ci.centos.org/job/gluster_build-rpms/9990/ to view the results.

From ci at centos.org Fri May 3 00:53:30 2019
From: ci at centos.org (ci at centos.org)
Date: Fri, 3 May 2019 00:53:30 +0000 (UTC)
Subject: [CI-results] gluster_build-rpms - Build # 9991 - Still Failing! (release-5 on CentOS-7/x86_64)
In-Reply-To: <1505652418.3027.1556844760913.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1505652418.3027.1556844760913.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1364104441.3029.1556844810453.JavaMail.jenkins@jenkins.ci.centos.org>

gluster_build-rpms - Build # 9991 - Still Failing:

Check console output at https://ci.centos.org/job/gluster_build-rpms/9991/ to view the results.

From ci at centos.org Fri May 3 00:54:20 2019
From: ci at centos.org (ci at centos.org)
Date: Fri, 3 May 2019 00:54:20 +0000 (UTC)
Subject: [CI-results] gluster_build-rpms - Build # 9992 - Still Failing! (release-6 on CentOS-7/x86_64)
In-Reply-To: <1364104441.3029.1556844810453.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1364104441.3029.1556844810453.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <861587584.3031.1556844860312.JavaMail.jenkins@jenkins.ci.centos.org>

gluster_build-rpms - Build # 9992 - Still Failing:

Check console output at https://ci.centos.org/job/gluster_build-rpms/9992/ to view the results.

From ci at centos.org Fri May 3 00:55:07 2019
From: ci at centos.org (ci at centos.org)
Date: Fri, 3 May 2019 00:55:07 +0000 (UTC)
Subject: [CI-results] gluster_build-rpms - Build # 9993 - Still Failing! (release-6 on CentOS-6/x86_64)
In-Reply-To: <861587584.3031.1556844860312.JavaMail.jenkins@jenkins.ci.centos.org>
References: <861587584.3031.1556844860312.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <15294745.3033.1556844908154.JavaMail.jenkins@jenkins.ci.centos.org>

gluster_build-rpms - Build # 9993 - Still Failing:

Check console output at https://ci.centos.org/job/gluster_build-rpms/9993/ to view the results.
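The gd2 nightly failure further up ends at mock's generic wrapper ("ERROR: Command failed: ... rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec"); the actual rpmbuild error is not in the mail. A sketch of reproducing it locally with the same chroot config, assuming the SRPM from the log is at hand (paths are the ones the log printed):

    # Rebuild the SRPM the way the CI job does; mock writes the rpmbuild
    # output, including the real error, to build.log in its result directory.
    mock -r epel-7-x86_64 --rebuild \
        /root/nightlyrpmjdkYs1/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm

    # This job redirects results to /srv/glusterd2/nightly/master/7/x86_64;
    # a stock mock config puts them under the path below instead.
    less /var/lib/mock/epel-7-x86_64/result/build.log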
From ci at centos.org Fri May 3 01:04:27 2019
From: ci at centos.org (ci at centos.org)
Date: Fri, 3 May 2019 01:04:27 +0000 (UTC)
Subject: [CI-results] Jenkins build is back to normal : gluster_anteater_gcs #156
In-Reply-To: <2119563175.2957.1556757541049.JavaMail.jenkins@jenkins.ci.centos.org>
References: <2119563175.2957.1556757541049.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <174975907.3036.1556845467221.JavaMail.jenkins@jenkins.ci.centos.org>

See 

From ci at centos.org Fri May 3 01:06:10 2019
From: ci at centos.org (ci at centos.org)
Date: Fri, 3 May 2019 01:06:10 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #181
In-Reply-To: <1611545614.2958.1556760151053.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1611545614.2958.1556760151053.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1628787446.3037.1556845570280.JavaMail.jenkins@jenkins.ci.centos.org>

See 

Changes:

[github] git clone with just depth 1 for glusterfs (#57)

------------------------------------------
[...truncated 55.21 KB...]
changed: [localhost]

TASK [Create docker network(s)] ************************************************

TASK [Determine the CMD directives] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Create molecule instance(s)] *********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) creation to complete] *******************************
changed: [localhost] => (item=None)
changed: [localhost]

PLAY RECAP *********************************************************************
localhost : ok=6 changed=4 unreachable=0 failed=0

--> Scenario: 'default'
--> Action: 'prepare'

PLAY [Prepare] *****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

PLAY RECAP *********************************************************************
instance : ok=3 changed=2 unreachable=0 failed=0

--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml',
                   u'dependencies': [],
                   u'galaxy_info': {u'description': u'your description',
                                    u'license': u'license (GPLv2, CC-BY, etc)',
                                    u'author': u'your name',
                                    u'company': u'your company (optional)',
                                    u'galaxy_tags': [],
                                    u'min_ansible_version': 1.2,
                                    '__line__': 2,
                                    '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml'},
                   '__line__': 1}}

[703] Should change default metadata: author
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1

[703] Should change default metadata: description
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1

[703] Should change default metadata: company
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1

[703] Should change default metadata: license
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1

An error occurred during the test sequence action: 'lint'. Cleaning up.
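Separate from the lint gate, both #180 and #181 also log "./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory" (it shows up again in the destroy output just below): the wrapper script assumes a role directory that is not in this checkout. A defensive sketch, assuming line 29 is a bare `cd` followed by the per-role molecule run (the script body itself is not included in these mails):

    # Hypothetical hardening for tests/run-centos-ci.sh: fail loudly when a
    # role directory is missing instead of letting `cd` error and fall through.
    role_dir="gluster-ansible-infra/roles/backend_setup"
    if [ -d "${role_dir}" ]; then
        ( cd "${role_dir}" && molecule test )   # molecule drives these runs; exact invocation assumed
    else
        echo "ERROR: ${role_dir} missing from checkout" >&2
        exit 1
    fi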
--> Scenario: 'default'
--> Action: 'destroy'

PLAY [Destroy] *****************************************************************

TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) deletion to complete] *******************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Delete docker network(s)] ************************************************

PLAY RECAP *********************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=0

./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── create
    └── prepare

--> Scenario: 'default'
--> Action: 'create'

PLAY [Create] ******************************************************************

TASK [Log into a Docker registry] **********************************************
skipping: [localhost] => (item=None)

TASK [Create Dockerfiles from image names] *************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Discover local Docker images] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Build an Ansible compatible image] ***************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Create docker network(s)] ************************************************

TASK [Determine the CMD directives] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Create molecule instance(s)] *********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) creation to complete] *******************************
changed: [localhost] => (item=None)
changed: [localhost]

PLAY RECAP *********************************************************************
localhost : ok=6 changed=4 unreachable=0 failed=0

--> Scenario: 'default'
--> Action: 'prepare'

PLAY [Prepare] *****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

PLAY RECAP *********************************************************************
instance : ok=3 changed=2 unreachable=0 failed=0

--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1

[703] Should change default metadata: author
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1

[703] Should change default metadata: description
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1

[703] Should change default metadata: company
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1

[703] Should change default metadata: license
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1

An error occurred during the test sequence action: 'lint'. Cleaning up.
--> Scenario: 'default'
--> Action: 'destroy'

PLAY [Destroy] *****************************************************************

TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) deletion to complete] *******************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Delete docker network(s)] ************************************************

PLAY RECAP *********************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=0

Build step 'Execute shell' marked build as failure
Performing Post build task...
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Sat May 4 00:15:52 2019 From: ci at centos.org (ci at centos.org) Date: Sat, 4 May 2019 00:15:52 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #353 In-Reply-To: <1801803472.3020.1556842556208.JavaMail.jenkins@jenkins.ci.centos.org> References: <1801803472.3020.1556842556208.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <619642054.3096.1556928952508.JavaMail.jenkins@jenkins.ci.centos.org> See Changes: [dkhandel] Add jenkins job to run lcov on gluster-block nightly [amarts] Add jenkins job to run lcov on gluster-block nightly (#58) [dkhandel] Give executable permissions to gluster-block-lcov script [dkhandel] Add epel repo on the duffy machines to get lcov package [dkhandel] Install epel on duffy machines to get lcov package [dkhandel] Add '-y' when installing packages on duffy centos machines ------------------------------------------ [...truncated 37.37 KB...] Total 68 MB/s | 143 MB 00:02 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7_6.2.x86_64 7/49 Installing : elfutils-0.172-2.el7.x86_64 8/49 Installing : pakchois-0.4-10.el7.x86_64 9/49 Installing : unzip-6.0-19.el7.x86_64 10/49 Installing : dwz-0.11-3.el7.x86_64 11/49 Installing : bzip2-1.0.6-13.el7.x86_64 12/49 Installing : usermode-1.111-5.el7.x86_64 13/49 Installing : python2-distro-1.2.0-1.el7.noarch 14/49 Installing : patch-2.7.1-10.el7_5.x86_64 15/49 Installing : python-backports-1.0-8.el7.x86_64 16/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 17/49 Installing : python-urllib3-1.10.2-5.el7.noarch 18/49 Installing : python-requests-2.6.0-1.el7_1.noarch 19/49 Installing : python-babel-0.9.6-8.el7.noarch 20/49 Installing : distribution-gpg-keys-1.30-1.el7.noarch 21/49 Installing : mock-core-configs-30.2-1.el7.noarch 22/49 Installing : libmodman-2.0.1-8.el7.x86_64 23/49 Installing : libproxy-0.4.11-11.el7.x86_64 24/49 Installing : python-markupsafe-0.11-10.el7.x86_64 25/49 Installing : python-jinja2-2.7.2-2.el7.noarch 26/49 Installing : gdb-7.6.1-114.el7.x86_64 27/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 28/49 Installing : perl-srpm-macros-1-8.el7.noarch 29/49 Installing : pigz-2.3.4-1.el7.x86_64 30/49 Installing : python2-pyroute2-0.4.13-2.el7.noarch 31/49 Installing : golang-src-1.11.5-1.el7.noarch 32/49 Installing : kernel-headers-3.10.0-957.12.1.el7.x86_64 33/49 Installing : glibc-headers-2.17-260.el7_6.4.x86_64 34/49 Installing : glibc-devel-2.17-260.el7_6.4.x86_64 35/49 Installing : gcc-4.8.5-36.el7_6.2.x86_64 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : 
zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-9.el7_6.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.15-1.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : python-jinja2-2.7.2-2.el7.noarch 2/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 3/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 4/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 5/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 6/49 Verifying : zip-3.0-11.el7.x86_64 7/49 Verifying : nettle-2.7.1-8.el7.x86_64 8/49 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 9/49 Verifying : kernel-headers-3.10.0-957.12.1.el7.x86_64 10/49 Verifying : golang-src-1.11.5-1.el7.noarch 11/49 Verifying : python2-pyroute2-0.4.13-2.el7.noarch 12/49 Verifying : pigz-2.3.4-1.el7.x86_64 13/49 Verifying : glibc-headers-2.17-260.el7_6.4.x86_64 14/49 Verifying : perl-srpm-macros-1-8.el7.noarch 15/49 Verifying : golang-1.11.5-1.el7.x86_64 16/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 17/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 18/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 19/49 Verifying : gdb-7.6.1-114.el7.x86_64 20/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 22/49 Verifying : gnutls-3.3.29-9.el7_6.x86_64 23/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 24/49 Verifying : libmodman-2.0.1-8.el7.x86_64 25/49 Verifying : mpfr-3.1.1-4.el7.x86_64 26/49 Verifying : distribution-gpg-keys-1.30-1.el7.noarch 27/49 Verifying : python-babel-0.9.6-8.el7.noarch 28/49 Verifying : mock-1.4.15-1.el7.noarch 29/49 Verifying : apr-util-1.5.2-6.el7.x86_64 30/49 Verifying : python-backports-1.0-8.el7.x86_64 31/49 Verifying : patch-2.7.1-10.el7_5.x86_64 32/49 Verifying : libmpc-1.0.1-3.el7.x86_64 33/49 Verifying : python2-distro-1.2.0-1.el7.noarch 34/49 Verifying : usermode-1.111-5.el7.x86_64 35/49 Verifying : python-six-1.9.0-2.el7.noarch 36/49 Verifying : libproxy-0.4.11-11.el7.x86_64 37/49 Verifying : neon-0.30.0-3.el7.x86_64 38/49 Verifying : glibc-devel-2.17-260.el7_6.4.x86_64 39/49 Verifying : mock-core-configs-30.2-1.el7.noarch 40/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 41/49 Verifying : bzip2-1.0.6-13.el7.x86_64 42/49 Verifying : subversion-1.7.14-14.el7.x86_64 43/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 44/49 Verifying : dwz-0.11-3.el7.x86_64 45/49 Verifying : unzip-6.0-19.el7.x86_64 46/49 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 47/49 Verifying : pakchois-0.4-10.el7.x86_64 48/49 Verifying : elfutils-0.172-2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.15-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.30-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.4 glibc-headers.x86_64 0:2.17-260.el7_6.4 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 
kernel-headers.x86_64 0:3.10.0-957.12.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-2.el7 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-1.el7 python2-pyroute2.noarch 0:0.4.13-2.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1828 0 --:--:-- --:--:-- --:--:-- 1833 100 8513k 100 8513k 0 0 12.4M 0 --:--:-- --:--:-- --:--:-- 12.4M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2012 0 --:--:-- --:--:-- --:--:-- 2009 100 38.3M 100 38.3M 0 0 44.6M 0 --:--:-- --:--:-- --:--:-- 44.6M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 532 0 --:--:-- --:--:-- --:--:-- 533 0 0 0 620 0 0 1594 0 --:--:-- --:--:-- --:--:-- 1594 100 10.7M 100 10.7M 0 0 15.8M 0 --:--:-- --:--:-- --:--:-- 15.8M ~/nightlyrpmqGIXAg/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmqGIXAg/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmqGIXAg/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmqGIXAg ~ INFO: mock.py version 1.4.15 starting (python version = 2.7.5)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmqGIXAg/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.15 INFO: Mock Version: 1.4.15 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmqGIXAg/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 23 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 6ecf14cd15994cf9bc4aec862b549066 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.8o5V3T:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins8378832481091661151.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done 8d260e5e +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 168 | n41.crusty | 172.19.2.41 | crusty | 3479 | Deployed | 8d260e5e | None | None | 7 | x86_64 | 1 | 2400 | None | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Sat May 4 00:51:02 2019 From: ci at centos.org (ci at centos.org) Date: Sat, 4 May 2019 00:51:02 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 9996 - Failure! 
	(release-4.1 on CentOS-6/x86_64)
Message-ID: <1243795065.3099.1556931062498.JavaMail.jenkins@jenkins.ci.centos.org>

gluster_build-rpms - Build # 9996 - Failure:

Check console output at https://ci.centos.org/job/gluster_build-rpms/9996/ to view the results.

From ci at centos.org Sat May 4 00:51:54 2019
From: ci at centos.org (ci at centos.org)
Date: Sat, 4 May 2019 00:51:54 +0000 (UTC)
Subject: [CI-results] gluster_build-rpms - Build # 9997 - Still Failing! (release-4.1 on CentOS-7/x86_64)
In-Reply-To: <1243795065.3099.1556931062498.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1243795065.3099.1556931062498.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <711429134.3101.1556931115120.JavaMail.jenkins@jenkins.ci.centos.org>

gluster_build-rpms - Build # 9997 - Still Failing:

Check console output at https://ci.centos.org/job/gluster_build-rpms/9997/ to view the results.

From ci at centos.org Sat May 4 00:52:42 2019
From: ci at centos.org (ci at centos.org)
Date: Sat, 4 May 2019 00:52:42 +0000 (UTC)
Subject: [CI-results] gluster_build-rpms - Build # 9998 - Still Failing! (release-5 on CentOS-6/x86_64)
In-Reply-To: <711429134.3101.1556931115120.JavaMail.jenkins@jenkins.ci.centos.org>
References: <711429134.3101.1556931115120.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1111134415.3103.1556931162234.JavaMail.jenkins@jenkins.ci.centos.org>

gluster_build-rpms - Build # 9998 - Still Failing:

Check console output at https://ci.centos.org/job/gluster_build-rpms/9998/ to view the results.

From ci at centos.org Sat May 4 00:53:30 2019
From: ci at centos.org (ci at centos.org)
Date: Sat, 4 May 2019 00:53:30 +0000 (UTC)
Subject: [CI-results] gluster_build-rpms - Build # 9999 - Still Failing! (release-5 on CentOS-7/x86_64)
In-Reply-To: <1111134415.3103.1556931162234.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1111134415.3103.1556931162234.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1884798807.3105.1556931210459.JavaMail.jenkins@jenkins.ci.centos.org>

gluster_build-rpms - Build # 9999 - Still Failing:

Check console output at https://ci.centos.org/job/gluster_build-rpms/9999/ to view the results.

From ci at centos.org Sat May 4 00:54:01 2019
From: ci at centos.org (ci at centos.org)
Date: Sat, 4 May 2019 00:54:01 +0000 (UTC)
Subject: [CI-results] gluster_build-rpms - Build # 10000 - Still Failing! (release-6 on CentOS-7/x86_64)
In-Reply-To: <1884798807.3105.1556931210459.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1884798807.3105.1556931210459.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1421220806.3107.1556931241988.JavaMail.jenkins@jenkins.ci.centos.org>

gluster_build-rpms - Build # 10000 - Still Failing:

Check console output at https://ci.centos.org/job/gluster_build-rpms/10000/ to view the results.

From ci at centos.org Sat May 4 00:54:52 2019
From: ci at centos.org (ci at centos.org)
Date: Sat, 4 May 2019 00:54:52 +0000 (UTC)
Subject: [CI-results] gluster_build-rpms - Build # 10001 - Still Failing! (release-6 on CentOS-6/x86_64)
In-Reply-To: <1421220806.3107.1556931241988.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1421220806.3107.1556931241988.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1900717177.3109.1556931292551.JavaMail.jenkins@jenkins.ci.centos.org>

gluster_build-rpms - Build # 10001 - Still Failing:

Check console output at https://ci.centos.org/job/gluster_build-rpms/10001/ to view the results.
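Every job in this digest ends with the same cico-node-done-from-ansible.sh post-build step, and the sh -xe traces above show it running with `SSID_FILE=` empty, so the unquoted `cat ${SSID_FILE}` falls back to reading stdin. A slightly hardened sketch of the same loop (the `cico -q node done` call is unchanged from the script shown in the logs; the guard and quoting are the only additions):

    #!/bin/sh
    # cico-node-done-from-ansible.sh (hardened sketch)
    # Release the Duffy nodes listed in the SSID file, one SSID per line.
    SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

    if [ ! -s "${SSID_FILE}" ]; then
        echo "No SSID file at '${SSID_FILE}'; nothing to release" >&2
        exit 0
    fi

    while read -r ssid; do
        [ -n "${ssid}" ] && cico -q node done "${ssid}"
    done < "${SSID_FILE}"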
From ci at centos.org Sat May 4 00:55:30 2019
From: ci at centos.org (ci at centos.org)
Date: Sat, 4 May 2019 00:55:30 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #157
Message-ID: <283217650.3110.1556931330071.JavaMail.jenkins@jenkins.ci.centos.org>

See 

Changes:

[dkhandel] Add jenkins job to run lcov on gluster-block nightly
[amarts] Add jenkins job to run lcov on gluster-block nightly (#58)
[dkhandel] Give executable permissions to gluster-block-lcov script
[dkhandel] Add epel repo on the duffy machines to get lcov package
[dkhandel] Install epel on duffy machines to get lcov package
[dkhandel] Add '-y' when installing packages on duffy centos machines

------------------------------------------
[...truncated 459.33 KB...]
changed: [kube1] => (item=gcs-etcd-operator.yml)
changed: [kube1] => (item=gcs-etcd-cluster.yml)
changed: [kube1] => (item=gcs-gd2-services.yml)
changed: [kube1] => (item=gcs-fs-csi.yml)
changed: [kube1] => (item=gcs-storage-snapshot.yml)
changed: [kube1] => (item=gcs-virtblock-csi.yml)
changed: [kube1] => (item=gcs-storage-virtblock.yml)
changed: [kube1] => (item=gcs-prometheus-operator.yml)
changed: [kube1] => (item=gcs-prometheus-bundle.yml)
changed: [kube1] => (item=gcs-prometheus-alertmanager-cluster.yml)
changed: [kube1] => (item=gcs-prometheus-operator-metrics.yml)
changed: [kube1] => (item=gcs-prometheus-kube-state-metrics.yml)
changed: [kube1] => (item=gcs-prometheus-node-exporter.yml)
changed: [kube1] => (item=gcs-prometheus-kube-metrics.yml)
changed: [kube1] => (item=gcs-prometheus-etcd.yml)
changed: [kube1] => (item=gcs-grafana.yml)
changed: [kube1] => (item=gcs-operator-crd.yml)
changed: [kube1] => (item=gcs-operator.yml)
changed: [kube1] => (item=gcs-mixins.yml)

TASK [GCS Pre | Manifests | Create GD2 manifests] ******************************
Saturday 04 May 2019 01:45:01 +0100 (0:00:11.774) 0:10:25.205 **********
included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1
included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1
included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1

TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Set fact kube_hostname] ***
Saturday 04 May 2019 01:45:01 +0100 (0:00:00.091) 0:10:25.297 **********
ok: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Create gcs-gd2-kube1.yml] ***
Saturday 04 May 2019 01:45:01 +0100 (0:00:00.162) 0:10:25.459 **********
changed: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Set fact kube_hostname] ***
Saturday 04 May 2019 01:45:02 +0100 (0:00:00.731) 0:10:26.191 **********
ok: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Create gcs-gd2-kube2.yml] ***
Saturday 04 May 2019 01:45:02 +0100 (0:00:00.154) 0:10:26.346 **********
changed: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Set fact kube_hostname] ***
Saturday 04 May 2019 01:45:03 +0100 (0:00:00.761) 0:10:27.107 **********
ok: [kube1]

TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Create gcs-gd2-kube3.yml] ***
Saturday 04 May 2019 01:45:03 +0100 (0:00:00.138) 0:10:27.245 **********
changed: [kube1]

TASK [GCS | Namespace | Create GCS namespace] **********************************
Saturday 04 May 2019 01:45:04 +0100 (0:00:00.724) 0:10:27.970 **********
ok: [kube1]

TASK [GCS | ETCD Operator | Deploy etcd-operator] ******************************
Saturday 04 May 2019 01:45:04 +0100 (0:00:00.635) 0:10:28.606 **********
ok: [kube1]

TASK [GCS | ETCD Operator | Wait for etcd-operator to be available] ************
Saturday 04 May 2019 01:45:05 +0100 (0:00:00.669) 0:10:29.276 **********
FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (50 retries left).
changed: [kube1]

TASK [GCS | Anthill | Register CRDs] *******************************************
Saturday 04 May 2019 01:45:16 +0100 (0:00:10.862) 0:10:40.138 **********
ok: [kube1]

TASK [Wait for GlusterCluster CRD to be registered] ****************************
Saturday 04 May 2019 01:45:17 +0100 (0:00:00.644) 0:10:40.782 **********
ok: [kube1]

TASK [Wait for GlusterNode CRD to be registered] *******************************
Saturday 04 May 2019 01:45:17 +0100 (0:00:00.456) 0:10:41.238 **********
ok: [kube1]

TASK [GCS | Anthill | Deploy operator] *****************************************
Saturday 04 May 2019 01:45:18 +0100 (0:00:00.478) 0:10:41.716 **********
ok: [kube1]

TASK [GCS | ETCD Cluster | Deploy etcd-cluster] ********************************
Saturday 04 May 2019 01:45:18 +0100 (0:00:00.672) 0:10:42.389 **********
ok: [kube1]

TASK [GCS | ETCD Cluster | Get etcd-client service] ****************************
Saturday 04 May 2019 01:45:19 +0100 (0:00:00.857) 0:10:43.246 **********
FAILED - RETRYING: GCS | ETCD Cluster | Get etcd-client service (5 retries left).
changed: [kube1]

TASK [GCS | ETCD Cluster | Set etcd_client_endpoint] ***************************
Saturday 04 May 2019 01:45:25 +0100 (0:00:05.879) 0:10:49.126 **********
ok: [kube1]

TASK [GCS | ETCD Cluster | Wait for etcd-cluster to become ready] **************
Saturday 04 May 2019 01:45:25 +0100 (0:00:00.138) 0:10:49.264 **********
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (50 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (49 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (48 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (47 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (46 retries left).
FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (45 retries left).
ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2 services] ********************************* Saturday 04 May 2019 01:46:30 +0100 (0:01:05.094) 0:11:54.359 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2] ****************************************** Saturday 04 May 2019 01:46:31 +0100 (0:00:00.799) 0:11:55.159 ********** included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Saturday 04 May 2019 01:46:31 +0100 (0:00:00.113) 0:11:55.272 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube1] *************************** Saturday 04 May 2019 01:46:31 +0100 (0:00:00.139) 0:11:55.412 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Saturday 04 May 2019 01:46:32 +0100 (0:00:00.703) 0:11:56.115 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] *************************** Saturday 04 May 2019 01:46:32 +0100 (0:00:00.165) 0:11:56.281 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Saturday 04 May 2019 01:46:33 +0100 (0:00:00.796) 0:11:57.078 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] *************************** Saturday 04 May 2019 01:46:33 +0100 (0:00:00.184) 0:11:57.262 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************ Saturday 04 May 2019 01:46:34 +0100 (0:00:00.741) 0:11:58.004 ********** changed: [kube1] TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] ***************************** Saturday 04 May 2019 01:46:34 +0100 (0:00:00.524) 0:11:58.529 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] ********** Saturday 04 May 2019 01:46:35 +0100 (0:00:00.181) 0:11:58.710 ********** FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (46 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (45 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (44 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (43 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (42 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (41 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (40 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (39 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (38 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (37 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (36 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (35 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (34 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (33 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (32 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (31 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (30 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (29 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (28 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (27 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (26 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (25 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (24 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (23 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (22 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (21 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (20 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (19 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (18 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (17 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (16 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (15 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (14 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (13 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (12 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (11 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (10 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (9 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (8 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (7 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (6 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (5 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (4 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (3 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (2 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (1 retries left). fatal: [kube1]: FAILED! 
=> {"attempts": 50, "changed": false, "content": "", "msg": "Status code was -1 and not [200]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.38.133:24007/v1/peers"} to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry PLAY RECAP ********************************************************************* kube1 : ok=420 changed=119 unreachable=0 failed=1 kube2 : ok=321 changed=93 unreachable=0 failed=0 kube3 : ok=283 changed=78 unreachable=0 failed=0 Saturday 04 May 2019 01:55:29 +0100 (0:08:54.750) 0:20:53.460 ********** =============================================================================== GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready -------- 534.75s GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 65.09s download : container_download | download images for kubeadm config images -- 34.68s kubernetes/master : kubeadm | Initialize first master ------------------ 28.82s Install packages ------------------------------------------------------- 25.87s kubernetes/master : kubeadm | Init other uninitialized masters --------- 25.04s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 18.75s Wait for host to be available ------------------------------------------ 16.53s Extend root VG --------------------------------------------------------- 15.06s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 12.97s etcd : Gen_certs | Write etcd master certs ----------------------------- 12.69s GCS Pre | Manifests | Sync GCS manifests ------------------------------- 11.77s etcd : reload etcd ----------------------------------------------------- 11.21s GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 10.86s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 10.66s container-engine/docker : Docker | pause while Docker restarts --------- 10.20s kubernetes/node : install | Copy hyperkube binary from download dir ----- 9.14s gather facts from all instances ----------------------------------------- 9.13s etcd : wait for etcd up ------------------------------------------------- 8.25s download : file_download | Download item -------------------------------- 7.71s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Sat May 4 01:05:52 2019 From: ci at centos.org (ci at centos.org) Date: Sat, 4 May 2019 01:05:52 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #182 In-Reply-To: <1628787446.3037.1556845570280.JavaMail.jenkins@jenkins.ci.centos.org> References: <1628787446.3037.1556845570280.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <742966099.3111.1556931952544.JavaMail.jenkins@jenkins.ci.centos.org> See Changes: [dkhandel] Add jenkins job to run lcov on gluster-block nightly [amarts] Add jenkins job to run lcov on gluster-block nightly (#58) [dkhandel] Give executable permissions to gluster-block-lcov script [dkhandel] Add epel repo on the duffy machines to get lcov package [dkhandel] Install epel on duffy machines to get lcov package [dkhandel] Add '-y' when installing packages on duffy centos machines ------------------------------------------ [...truncated 55.22 KB...] changed: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. 
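The [701] and [703] hits above are ansible-lint objecting to the untouched Galaxy boilerplate in roles/firewall_config/meta/main.yml: the role still carries the generated placeholders ("your name", "your description", "license (GPLv2, CC-BY, etc)") and lists no platforms, so the lint action fails and the whole molecule test sequence aborts. A minimal sketch of a meta/main.yml that would satisfy both rules follows; the author, description, company and license values here are illustrative placeholders chosen for the example, not the project's actual metadata:

    galaxy_info:
      author: "Gluster Ansible maintainers"          # [703] replace the default author
      description: "Configure firewalld for GlusterFS nodes"  # [703] replace the default description
      company: "Red Hat"                             # [703] replace (or drop) the default company
      license: "GPLv3"                               # [703] name one concrete license
      min_ansible_version: 1.2
      platforms:                                     # [701] role info should contain platforms
        - name: EL
          versions:
            - 7
      galaxy_tags: []
    dependencies: []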
--> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── create └── prepare --> Scenario: 'default' --> Action: 'create' PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Sun May 5 00:16:02 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 5 May 2019 00:16:02 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #354 In-Reply-To: <619642054.3096.1556928952508.JavaMail.jenkins@jenkins.ci.centos.org> References: <619642054.3096.1556928952508.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <602183311.3148.1557015362161.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.37 KB...] Total 68 MB/s | 143 MB 00:02 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7_6.2.x86_64 7/49 Installing : elfutils-0.172-2.el7.x86_64 8/49 Installing : pakchois-0.4-10.el7.x86_64 9/49 Installing : unzip-6.0-19.el7.x86_64 10/49 Installing : dwz-0.11-3.el7.x86_64 11/49 Installing : bzip2-1.0.6-13.el7.x86_64 12/49 Installing : usermode-1.111-5.el7.x86_64 13/49 Installing : patch-2.7.1-10.el7_5.x86_64 14/49 Installing : python-backports-1.0-8.el7.x86_64 15/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 16/49 Installing : python-urllib3-1.10.2-5.el7.noarch 17/49 Installing : python-requests-2.6.0-1.el7_1.noarch 18/49 Installing : python-babel-0.9.6-8.el7.noarch 19/49 Installing : distribution-gpg-keys-1.30-1.el7.noarch 20/49 Installing : mock-core-configs-30.2-1.el7.noarch 21/49 Installing : libmodman-2.0.1-8.el7.x86_64 22/49 Installing : libproxy-0.4.11-11.el7.x86_64 23/49 Installing : python-markupsafe-0.11-10.el7.x86_64 24/49 Installing : python-jinja2-2.7.2-2.el7.noarch 25/49 Installing : python2-distro-1.2.0-3.el7.noarch 26/49 Installing : gdb-7.6.1-114.el7.x86_64 27/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 28/49 Installing : perl-srpm-macros-1-8.el7.noarch 29/49 Installing : pigz-2.3.4-1.el7.x86_64 30/49 Installing : python2-pyroute2-0.4.13-2.el7.noarch 31/49 Installing : golang-src-1.11.5-1.el7.noarch 32/49 Installing : kernel-headers-3.10.0-957.12.1.el7.x86_64 33/49 Installing : glibc-headers-2.17-260.el7_6.4.x86_64 34/49 Installing : glibc-devel-2.17-260.el7_6.4.x86_64 35/49 Installing : gcc-4.8.5-36.el7_6.2.x86_64 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-9.el7_6.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 
46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.15-1.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : python-jinja2-2.7.2-2.el7.noarch 2/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 3/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 4/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 5/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 6/49 Verifying : zip-3.0-11.el7.x86_64 7/49 Verifying : nettle-2.7.1-8.el7.x86_64 8/49 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 9/49 Verifying : kernel-headers-3.10.0-957.12.1.el7.x86_64 10/49 Verifying : golang-src-1.11.5-1.el7.noarch 11/49 Verifying : python2-pyroute2-0.4.13-2.el7.noarch 12/49 Verifying : pigz-2.3.4-1.el7.x86_64 13/49 Verifying : glibc-headers-2.17-260.el7_6.4.x86_64 14/49 Verifying : perl-srpm-macros-1-8.el7.noarch 15/49 Verifying : golang-1.11.5-1.el7.x86_64 16/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 17/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 18/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 19/49 Verifying : gdb-7.6.1-114.el7.x86_64 20/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 22/49 Verifying : gnutls-3.3.29-9.el7_6.x86_64 23/49 Verifying : python2-distro-1.2.0-3.el7.noarch 24/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 25/49 Verifying : libmodman-2.0.1-8.el7.x86_64 26/49 Verifying : mpfr-3.1.1-4.el7.x86_64 27/49 Verifying : distribution-gpg-keys-1.30-1.el7.noarch 28/49 Verifying : python-babel-0.9.6-8.el7.noarch 29/49 Verifying : mock-1.4.15-1.el7.noarch 30/49 Verifying : apr-util-1.5.2-6.el7.x86_64 31/49 Verifying : python-backports-1.0-8.el7.x86_64 32/49 Verifying : patch-2.7.1-10.el7_5.x86_64 33/49 Verifying : libmpc-1.0.1-3.el7.x86_64 34/49 Verifying : usermode-1.111-5.el7.x86_64 35/49 Verifying : python-six-1.9.0-2.el7.noarch 36/49 Verifying : libproxy-0.4.11-11.el7.x86_64 37/49 Verifying : neon-0.30.0-3.el7.x86_64 38/49 Verifying : glibc-devel-2.17-260.el7_6.4.x86_64 39/49 Verifying : mock-core-configs-30.2-1.el7.noarch 40/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 41/49 Verifying : bzip2-1.0.6-13.el7.x86_64 42/49 Verifying : subversion-1.7.14-14.el7.x86_64 43/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 44/49 Verifying : dwz-0.11-3.el7.x86_64 45/49 Verifying : unzip-6.0-19.el7.x86_64 46/49 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 47/49 Verifying : pakchois-0.4-10.el7.x86_64 48/49 Verifying : elfutils-0.172-2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.15-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.30-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.4 glibc-headers.x86_64 0:2.17-260.el7_6.4 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.12.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 
pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-2.el7 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-3.el7 python2-pyroute2.noarch 0:0.4.13-2.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1903 0 --:--:-- --:--:-- --:--:-- 1908 100 8513k 100 8513k 0 0 11.1M 0 --:--:-- --:--:-- --:--:-- 11.1M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2324 0 --:--:-- --:--:-- --:--:-- 2330 11 38.3M 11 4485k 0 0 9573k 0 0:00:04 --:--:-- 0:00:04 9573k100 38.3M 100 38.3M 0 0 49.5M 0 --:--:-- --:--:-- --:--:-- 111M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 581 0 --:--:-- --:--:-- --:--:-- 581 0 0 0 620 0 0 1753 0 --:--:-- --:--:-- --:--:-- 1753 100 10.7M 100 10.7M 0 0 16.9M 0 --:--:-- --:--:-- --:--:-- 16.9M ~/nightlyrpmS0BbQt/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmS0BbQt/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmS0BbQt/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmS0BbQt ~ INFO: mock.py version 1.4.15 starting (python version = 2.7.5)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmS0BbQt/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.15 INFO: Mock Version: 1.4.15 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmS0BbQt/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 24 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 279a1126b34c4b529b72df9164d4b5f2 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.OwerW_:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins4177599296681592280.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done c1a358c0 +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 159 | n32.crusty | 172.19.2.32 | crusty | 3492 | Deployed | c1a358c0 | None | None | 7 | x86_64 | 1 | 2310 | None | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Sun May 5 00:51:04 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 5 May 2019 00:51:04 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10004 - Failure! 
(release-4.1 on CentOS-6/x86_64) Message-ID: <387955706.3151.1557017464709.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10004 - Failure: Check console output at https://ci.centos.org/job/gluster_build-rpms/10004/ to view the results. From ci at centos.org Sun May 5 00:51:54 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 5 May 2019 00:51:54 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10005 - Still Failing! (release-4.1 on CentOS-7/x86_64) In-Reply-To: <387955706.3151.1557017464709.JavaMail.jenkins@jenkins.ci.centos.org> References: <387955706.3151.1557017464709.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1074958487.3153.1557017514281.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10005 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10005/ to view the results. From ci at centos.org Sun May 5 00:52:25 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 5 May 2019 00:52:25 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10006 - Still Failing! (release-5 on CentOS-6/x86_64) In-Reply-To: <1074958487.3153.1557017514281.JavaMail.jenkins@jenkins.ci.centos.org> References: <1074958487.3153.1557017514281.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <387506375.3155.1557017545758.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10006 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10006/ to view the results. From ci at centos.org Sun May 5 00:52:55 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 5 May 2019 00:52:55 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10007 - Still Failing! (release-5 on CentOS-7/x86_64) In-Reply-To: <387506375.3155.1557017545758.JavaMail.jenkins@jenkins.ci.centos.org> References: <387506375.3155.1557017545758.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1047844286.3157.1557017575608.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10007 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10007/ to view the results. From ci at centos.org Sun May 5 00:53:26 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 5 May 2019 00:53:26 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10008 - Still Failing! (release-6 on CentOS-7/x86_64) In-Reply-To: <1047844286.3157.1557017575608.JavaMail.jenkins@jenkins.ci.centos.org> References: <1047844286.3157.1557017575608.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <236261258.3159.1557017606507.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10008 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10008/ to view the results. From ci at centos.org Sun May 5 00:53:57 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 5 May 2019 00:53:57 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10009 - Still Failing! (release-6 on CentOS-6/x86_64) In-Reply-To: <236261258.3159.1557017606507.JavaMail.jenkins@jenkins.ci.centos.org> References: <236261258.3159.1557017606507.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1075393110.3161.1557017637445.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10009 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10009/ to view the results. 
From ci at centos.org Sun May 5 01:01:53 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 5 May 2019 01:01:53 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #183 In-Reply-To: <742966099.3111.1556931952544.JavaMail.jenkins@jenkins.ci.centos.org> References: <742966099.3111.1556931952544.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <2109013768.3162.1557018113959.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 55.27 KB...] changed: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. 
--> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── create └── prepare --> Scenario: 'default' --> Action: 'create' PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Sun May 5 01:07:19 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 5 May 2019 01:07:19 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #158 In-Reply-To: <283217650.3110.1556931330071.JavaMail.jenkins@jenkins.ci.centos.org> References: <283217650.3110.1556931330071.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <2043881915.3163.1557018439228.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 459.77 KB...] changed: [kube1] => (item=gcs-etcd-operator.yml) changed: [kube1] => (item=gcs-etcd-cluster.yml) changed: [kube1] => (item=gcs-gd2-services.yml) changed: [kube1] => (item=gcs-fs-csi.yml) changed: [kube1] => (item=gcs-storage-snapshot.yml) changed: [kube1] => (item=gcs-virtblock-csi.yml) changed: [kube1] => (item=gcs-storage-virtblock.yml) changed: [kube1] => (item=gcs-prometheus-operator.yml) changed: [kube1] => (item=gcs-prometheus-bundle.yml) changed: [kube1] => (item=gcs-prometheus-alertmanager-cluster.yml) changed: [kube1] => (item=gcs-prometheus-operator-metrics.yml) changed: [kube1] => (item=gcs-prometheus-kube-state-metrics.yml) changed: [kube1] => (item=gcs-prometheus-node-exporter.yml) changed: [kube1] => (item=gcs-prometheus-kube-metrics.yml) changed: [kube1] => (item=gcs-prometheus-etcd.yml) changed: [kube1] => (item=gcs-grafana.yml) changed: [kube1] => (item=gcs-operator-crd.yml) changed: [kube1] => (item=gcs-operator.yml) changed: [kube1] => (item=gcs-mixins.yml) TASK [GCS Pre | Manifests | Create GD2 manifests] ****************************** Sunday 05 May 2019 01:55:33 +0100 (0:00:34.615) 0:17:58.436 ************ included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Set fact kube_hostname] *** Sunday 05 May 2019 01:55:33 +0100 (0:00:00.255) 0:17:58.691 ************ ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Create gcs-gd2-kube1.yml] *** Sunday 05 May 2019 01:55:33 +0100 (0:00:00.549) 0:17:59.241 ************ changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Set fact kube_hostname] *** Sunday 05 May 2019 01:55:36 +0100 (0:00:02.040) 0:18:01.282 ************ ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Create gcs-gd2-kube2.yml] *** Sunday 05 May 2019 01:55:36 +0100 (0:00:00.483) 0:18:01.765 ************ changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Set fact kube_hostname] *** Sunday 05 May 2019 01:55:38 +0100 (0:00:02.211) 0:18:03.977 ************ ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Create gcs-gd2-kube3.yml] *** Sunday 05 May 2019 01:55:39 +0100 (0:00:00.431) 0:18:04.408 ************ changed: [kube1] TASK [GCS | Namespace | Create GCS namespace] ********************************** Sunday 05 May 2019 01:55:41 +0100 (0:00:02.059) 0:18:06.468 ************ ok: [kube1] TASK [GCS | ETCD Operator | Deploy etcd-operator] ****************************** Sunday 05 May 2019 01:55:42 +0100 (0:00:01.580) 
0:18:08.049 ************ ok: [kube1] TASK [GCS | ETCD Operator | Wait for etcd-operator to be available] ************ Sunday 05 May 2019 01:55:44 +0100 (0:00:01.733) 0:18:09.782 ************ FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (50 retries left). changed: [kube1] TASK [GCS | Anthill | Register CRDs] ******************************************* Sunday 05 May 2019 01:55:56 +0100 (0:00:12.115) 0:18:21.898 ************ ok: [kube1] TASK [Wait for GlusterCluster CRD to be registered] **************************** Sunday 05 May 2019 01:55:58 +0100 (0:00:01.611) 0:18:23.510 ************ ok: [kube1] TASK [Wait for GlusterNode CRD to be registered] ******************************* Sunday 05 May 2019 01:55:59 +0100 (0:00:01.560) 0:18:25.071 ************ ok: [kube1] TASK [GCS | Anthill | Deploy operator] ***************************************** Sunday 05 May 2019 01:56:01 +0100 (0:00:01.421) 0:18:26.493 ************ ok: [kube1] TASK [GCS | ETCD Cluster | Deploy etcd-cluster] ******************************** Sunday 05 May 2019 01:56:02 +0100 (0:00:01.745) 0:18:28.238 ************ ok: [kube1] TASK [GCS | ETCD Cluster | Get etcd-client service] **************************** Sunday 05 May 2019 01:56:05 +0100 (0:00:02.071) 0:18:30.310 ************ changed: [kube1] TASK [GCS | ETCD Cluster | Set etcd_client_endpoint] *************************** Sunday 05 May 2019 01:56:06 +0100 (0:00:01.529) 0:18:31.839 ************ ok: [kube1] TASK [GCS | ETCD Cluster | Wait for etcd-cluster to become ready] ************** Sunday 05 May 2019 01:56:07 +0100 (0:00:00.474) 0:18:32.314 ************ FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (46 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (45 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (44 retries left). 
ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2 services] ********************************* Sunday 05 May 2019 01:57:31 +0100 (0:01:23.973) 0:19:56.288 ************ ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2] ****************************************** Sunday 05 May 2019 01:57:32 +0100 (0:00:01.753) 0:19:58.042 ************ included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Sunday 05 May 2019 01:57:33 +0100 (0:00:00.222) 0:19:58.264 ************ ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube1] *************************** Sunday 05 May 2019 01:57:33 +0100 (0:00:00.450) 0:19:58.715 ************ ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Sunday 05 May 2019 01:57:35 +0100 (0:00:01.725) 0:20:00.440 ************ ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] *************************** Sunday 05 May 2019 01:57:35 +0100 (0:00:00.303) 0:20:00.743 ************ ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Sunday 05 May 2019 01:57:37 +0100 (0:00:01.592) 0:20:02.336 ************ ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] *************************** Sunday 05 May 2019 01:57:37 +0100 (0:00:00.335) 0:20:02.672 ************ ok: [kube1] TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************ Sunday 05 May 2019 01:57:38 +0100 (0:00:01.521) 0:20:04.193 ************ changed: [kube1] TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] ***************************** Sunday 05 May 2019 01:57:40 +0100 (0:00:01.786) 0:20:05.980 ************ ok: [kube1] TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] ********** Sunday 05 May 2019 01:57:41 +0100 (0:00:00.343) 0:20:06.323 ************ FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (46 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (45 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (44 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (43 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (42 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (41 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (40 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (39 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (38 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (37 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (36 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (35 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (34 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (33 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (32 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (31 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (30 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (29 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (28 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (27 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (26 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (25 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (24 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (23 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (22 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (21 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (20 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (19 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (18 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (17 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (16 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (15 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (14 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (13 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (12 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (11 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (10 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (9 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (8 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (7 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (6 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (5 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (4 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (3 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (2 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (1 retries left). fatal: [kube1]: FAILED! 
=> {"attempts": 50, "changed": false, "content": "", "msg": "Status code was -1 and not [200]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.14.31:24007/v1/peers"} to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry PLAY RECAP ********************************************************************* kube1 : ok=421 changed=119 unreachable=0 failed=1 kube2 : ok=320 changed=93 unreachable=0 failed=0 kube3 : ok=283 changed=78 unreachable=0 failed=0 Sunday 05 May 2019 02:07:18 +0100 (0:09:37.741) 0:29:44.064 ************ =============================================================================== GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready -------- 577.74s GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 83.97s kubernetes/master : kubeadm | Initialize first master ------------------ 41.30s kubernetes/master : kubeadm | Init other uninitialized masters --------- 38.00s GCS Pre | Manifests | Sync GCS manifests ------------------------------- 34.62s etcd : Gen_certs | Write etcd master certs ----------------------------- 33.06s download : container_download | download images for kubeadm config images -- 32.30s Install packages ------------------------------------------------------- 32.01s kubernetes/master : kubeadm | write out kubeadm certs ------------------ 20.99s Wait for host to be available ------------------------------------------ 20.76s kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template -- 18.94s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 15.50s kubernetes-apps/ansible : Kubernetes Apps | Start Resources ------------ 13.52s gather facts from all instances ---------------------------------------- 13.46s etcd : Gen_certs | Gather etcd master certs ---------------------------- 12.87s GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 12.12s etcd : reload etcd ----------------------------------------------------- 11.95s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Create manifests -- 11.54s container-engine/docker : Docker | pause while Docker restarts --------- 10.37s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Apply manifests -- 10.22s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Mon May 6 00:15:56 2019 From: ci at centos.org (ci at centos.org) Date: Mon, 6 May 2019 00:15:56 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #355 In-Reply-To: <602183311.3148.1557015362161.JavaMail.jenkins@jenkins.ci.centos.org> References: <602183311.3148.1557015362161.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <267271747.3196.1557101756775.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.37 KB...] Total 60 MB/s | 143 MB 00:02 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7_6.2.x86_64 7/49 Installing : elfutils-0.172-2.el7.x86_64 8/49 Installing : pakchois-0.4-10.el7.x86_64 9/49 Installing : unzip-6.0-19.el7.x86_64 10/49 Installing : dwz-0.11-3.el7.x86_64 11/49 Installing : bzip2-1.0.6-13.el7.x86_64 12/49 Installing : usermode-1.111-5.el7.x86_64 13/49 Installing : patch-2.7.1-10.el7_5.x86_64 14/49 Installing : python-backports-1.0-8.el7.x86_64 15/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 16/49 Installing : python-urllib3-1.10.2-5.el7.noarch 17/49 Installing : python-requests-2.6.0-1.el7_1.noarch 18/49 Installing : python-babel-0.9.6-8.el7.noarch 19/49 Installing : distribution-gpg-keys-1.30-1.el7.noarch 20/49 Installing : mock-core-configs-30.2-1.el7.noarch 21/49 Installing : libmodman-2.0.1-8.el7.x86_64 22/49 Installing : libproxy-0.4.11-11.el7.x86_64 23/49 Installing : python-markupsafe-0.11-10.el7.x86_64 24/49 Installing : python-jinja2-2.7.2-2.el7.noarch 25/49 Installing : python2-distro-1.2.0-3.el7.noarch 26/49 Installing : gdb-7.6.1-114.el7.x86_64 27/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 28/49 Installing : perl-srpm-macros-1-8.el7.noarch 29/49 Installing : pigz-2.3.4-1.el7.x86_64 30/49 Installing : python2-pyroute2-0.4.13-2.el7.noarch 31/49 Installing : golang-src-1.11.5-1.el7.noarch 32/49 Installing : kernel-headers-3.10.0-957.12.1.el7.x86_64 33/49 Installing : glibc-headers-2.17-260.el7_6.4.x86_64 34/49 Installing : glibc-devel-2.17-260.el7_6.4.x86_64 35/49 Installing : gcc-4.8.5-36.el7_6.2.x86_64 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-9.el7_6.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 
46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.15-1.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : python-jinja2-2.7.2-2.el7.noarch 2/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 3/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 4/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 5/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 6/49 Verifying : zip-3.0-11.el7.x86_64 7/49 Verifying : nettle-2.7.1-8.el7.x86_64 8/49 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 9/49 Verifying : kernel-headers-3.10.0-957.12.1.el7.x86_64 10/49 Verifying : golang-src-1.11.5-1.el7.noarch 11/49 Verifying : python2-pyroute2-0.4.13-2.el7.noarch 12/49 Verifying : pigz-2.3.4-1.el7.x86_64 13/49 Verifying : glibc-headers-2.17-260.el7_6.4.x86_64 14/49 Verifying : perl-srpm-macros-1-8.el7.noarch 15/49 Verifying : golang-1.11.5-1.el7.x86_64 16/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 17/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 18/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 19/49 Verifying : gdb-7.6.1-114.el7.x86_64 20/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 22/49 Verifying : gnutls-3.3.29-9.el7_6.x86_64 23/49 Verifying : python2-distro-1.2.0-3.el7.noarch 24/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 25/49 Verifying : libmodman-2.0.1-8.el7.x86_64 26/49 Verifying : mpfr-3.1.1-4.el7.x86_64 27/49 Verifying : distribution-gpg-keys-1.30-1.el7.noarch 28/49 Verifying : python-babel-0.9.6-8.el7.noarch 29/49 Verifying : mock-1.4.15-1.el7.noarch 30/49 Verifying : apr-util-1.5.2-6.el7.x86_64 31/49 Verifying : python-backports-1.0-8.el7.x86_64 32/49 Verifying : patch-2.7.1-10.el7_5.x86_64 33/49 Verifying : libmpc-1.0.1-3.el7.x86_64 34/49 Verifying : usermode-1.111-5.el7.x86_64 35/49 Verifying : python-six-1.9.0-2.el7.noarch 36/49 Verifying : libproxy-0.4.11-11.el7.x86_64 37/49 Verifying : neon-0.30.0-3.el7.x86_64 38/49 Verifying : glibc-devel-2.17-260.el7_6.4.x86_64 39/49 Verifying : mock-core-configs-30.2-1.el7.noarch 40/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 41/49 Verifying : bzip2-1.0.6-13.el7.x86_64 42/49 Verifying : subversion-1.7.14-14.el7.x86_64 43/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 44/49 Verifying : dwz-0.11-3.el7.x86_64 45/49 Verifying : unzip-6.0-19.el7.x86_64 46/49 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 47/49 Verifying : pakchois-0.4-10.el7.x86_64 48/49 Verifying : elfutils-0.172-2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.15-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.30-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.4 glibc-headers.x86_64 0:2.17-260.el7_6.4 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.12.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 
pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-2.el7 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-3.el7 python2-pyroute2.noarch 0:0.4.13-2.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1580 0 --:--:-- --:--:-- --:--:-- 1583 100 8513k 100 8513k 0 0 11.7M 0 --:--:-- --:--:-- --:--:-- 11.7M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2210 0 --:--:-- --:--:-- --:--:-- 2207 100 38.3M 100 38.3M 0 0 37.6M 0 0:00:01 0:00:01 --:--:-- 37.6M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 681 0 --:--:-- --:--:-- --:--:-- 683 0 0 0 620 0 0 1974 0 --:--:-- --:--:-- --:--:-- 1974 100 10.7M 100 10.7M 0 0 16.4M 0 --:--:-- --:--:-- --:--:-- 16.4M ~/nightlyrpmKPubno/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmKPubno/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmKPubno/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmKPubno ~ INFO: mock.py version 1.4.15 starting (python version = 2.7.5)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmKPubno/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.15 INFO: Mock Version: 1.4.15 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmKPubno/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 23 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 8c3dafe10d124d67add2d18df39ca4d6 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.VIkIPs:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins1786042697142614546.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done 5dc7352a +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 172 | n45.crusty | 172.19.2.45 | crusty | 3499 | Deployed | 5dc7352a | None | None | 7 | x86_64 | 1 | 2440 | None | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Mon May 6 00:51:03 2019 From: ci at centos.org (ci at centos.org) Date: Mon, 6 May 2019 00:51:03 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10012 - Failure! 
(release-4.1 on CentOS-6/x86_64) Message-ID: <537208092.3198.1557103864444.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10012 - Failure: Check console output at https://ci.centos.org/job/gluster_build-rpms/10012/ to view the results. From ci at centos.org Mon May 6 00:51:51 2019 From: ci at centos.org (ci at centos.org) Date: Mon, 6 May 2019 00:51:51 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10013 - Still Failing! (release-4.1 on CentOS-7/x86_64) In-Reply-To: <537208092.3198.1557103864444.JavaMail.jenkins@jenkins.ci.centos.org> References: <537208092.3198.1557103864444.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <497083058.3200.1557103911272.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10013 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10013/ to view the results. From ci at centos.org Mon May 6 00:52:25 2019 From: ci at centos.org (ci at centos.org) Date: Mon, 6 May 2019 00:52:25 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10014 - Still Failing! (release-5 on CentOS-6/x86_64) In-Reply-To: <497083058.3200.1557103911272.JavaMail.jenkins@jenkins.ci.centos.org> References: <497083058.3200.1557103911272.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <2026251982.3202.1557103945189.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10014 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10014/ to view the results. From ci at centos.org Mon May 6 00:52:56 2019 From: ci at centos.org (ci at centos.org) Date: Mon, 6 May 2019 00:52:56 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10015 - Still Failing! (release-5 on CentOS-7/x86_64) In-Reply-To: <2026251982.3202.1557103945189.JavaMail.jenkins@jenkins.ci.centos.org> References: <2026251982.3202.1557103945189.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <390364577.3204.1557103976570.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10015 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10015/ to view the results. From ci at centos.org Mon May 6 00:53:27 2019 From: ci at centos.org (ci at centos.org) Date: Mon, 6 May 2019 00:53:27 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10016 - Still Failing! (release-6 on CentOS-7/x86_64) In-Reply-To: <390364577.3204.1557103976570.JavaMail.jenkins@jenkins.ci.centos.org> References: <390364577.3204.1557103976570.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <385035603.3206.1557104007416.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10016 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10016/ to view the results. From ci at centos.org Mon May 6 00:53:58 2019 From: ci at centos.org (ci at centos.org) Date: Mon, 6 May 2019 00:53:58 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10017 - Still Failing! (release-6 on CentOS-6/x86_64) In-Reply-To: <385035603.3206.1557104007416.JavaMail.jenkins@jenkins.ci.centos.org> References: <385035603.3206.1557104007416.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1869915143.3208.1557104038988.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10017 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10017/ to view the results. 
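For readability, here is the post-build snippet that recurs throughout these messages, with the line breaks the archive collapsed restored (the trailing comment is truncated in the source as well):

# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

When the post-build condition matches, as in the gluster_gd2-nightly-rpms #355 message above, Jenkins runs it under /bin/sh -xe and cico hands the Duffy node back (the n45.crusty table); when it does not match, the script is skipped.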
From ci at centos.org Mon May 6 01:01:54 2019 From: ci at centos.org (ci at centos.org) Date: Mon, 6 May 2019 01:01:54 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #184 In-Reply-To: <2109013768.3162.1557018113959.JavaMail.jenkins@jenkins.ci.centos.org> References: <2109013768.3162.1557018113959.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1521732368.3209.1557104515020.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 55.27 KB...] changed: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. 
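The lint stage fails on role metadata rather than on any task: rule 701 wants a platforms list in meta/main.yml, and the 703 hits are the untouched galaxy_info placeholders dumped above. A sketch of the kind of metadata that would satisfy both rules (field values are illustrative assumptions, not the project's actual details):

# Illustrative fix only: fill in the galaxy_info placeholders that
# ansible-lint flags; the platforms list addresses rule 701.
cat > roles/firewall_config/meta/main.yml <<'EOF'
galaxy_info:
  author: Gluster maintainers        # was "your name" (rule 703)
  description: Configure firewalld for GlusterFS hosts
  company: Red Hat                   # was "your company (optional)"
  license: GPLv2                     # was "license (GPLv2, CC-BY, etc)"
  min_ansible_version: 1.2
  platforms:                        # rule 701: Role info should contain platforms
    - name: EL
      versions:
        - 7
  galaxy_tags: []
dependencies: []
EOF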
--> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── create └── prepare --> Scenario: 'default' --> Action: 'create' PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
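Separately, the second molecule pass never tested the intended role: run-centos-ci.sh could not cd into roles/backend_setup/ and, judging by the repeated firewall_config paths that follow the error, re-ran the previous role's scenario from the old working directory. A hypothetical guard for that step:

# Sketch: fail fast when the role directory is missing instead of
# silently re-running molecule from the previous role's directory.
role_dir=gluster-ansible-infra/roles/backend_setup/
cd "$role_dir" || { echo "missing $role_dir, aborting" >&2; exit 1; }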
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Mon May 6 01:07:49 2019 From: ci at centos.org (ci at centos.org) Date: Mon, 6 May 2019 01:07:49 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #159 In-Reply-To: <2043881915.3163.1557018439228.JavaMail.jenkins@jenkins.ci.centos.org> References: <2043881915.3163.1557018439228.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1229054864.3210.1557104869313.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 459.54 KB...] changed: [kube1] => (item=gcs-etcd-operator.yml) changed: [kube1] => (item=gcs-etcd-cluster.yml) changed: [kube1] => (item=gcs-gd2-services.yml) changed: [kube1] => (item=gcs-fs-csi.yml) changed: [kube1] => (item=gcs-storage-snapshot.yml) changed: [kube1] => (item=gcs-virtblock-csi.yml) changed: [kube1] => (item=gcs-storage-virtblock.yml) changed: [kube1] => (item=gcs-prometheus-operator.yml) changed: [kube1] => (item=gcs-prometheus-bundle.yml) changed: [kube1] => (item=gcs-prometheus-alertmanager-cluster.yml) changed: [kube1] => (item=gcs-prometheus-operator-metrics.yml) changed: [kube1] => (item=gcs-prometheus-kube-state-metrics.yml) changed: [kube1] => (item=gcs-prometheus-node-exporter.yml) changed: [kube1] => (item=gcs-prometheus-kube-metrics.yml) changed: [kube1] => (item=gcs-prometheus-etcd.yml) changed: [kube1] => (item=gcs-grafana.yml) changed: [kube1] => (item=gcs-operator-crd.yml) changed: [kube1] => (item=gcs-operator.yml) changed: [kube1] => (item=gcs-mixins.yml) TASK [GCS Pre | Manifests | Create GD2 manifests] ****************************** Monday 06 May 2019 01:56:04 +0100 (0:00:35.546) 0:18:08.217 ************ included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Set fact kube_hostname] *** Monday 06 May 2019 01:56:05 +0100 (0:00:00.326) 0:18:08.543 ************ ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Create gcs-gd2-kube1.yml] *** Monday 06 May 2019 01:56:05 +0100 (0:00:00.449) 0:18:08.993 ************ changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Set fact kube_hostname] *** Monday 06 May 2019 01:56:07 +0100 (0:00:02.209) 0:18:11.203 ************ ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Create gcs-gd2-kube2.yml] *** Monday 06 May 2019 01:56:08 +0100 (0:00:00.432) 0:18:11.635 ************ changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Set fact kube_hostname] *** Monday 06 May 2019 01:56:10 +0100 (0:00:02.013) 0:18:13.649 ************ ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Create gcs-gd2-kube3.yml] *** Monday 06 May 2019 01:56:10 +0100 (0:00:00.448) 0:18:14.097 ************ changed: [kube1] TASK [GCS | Namespace | Create GCS namespace] ********************************** Monday 06 May 2019 01:56:12 +0100 (0:00:02.104) 0:18:16.201 ************ ok: [kube1] TASK [GCS | ETCD Operator | Deploy etcd-operator] ****************************** Monday 06 May 2019 01:56:14 +0100 
(0:00:01.602) 0:18:17.803 ************ ok: [kube1] TASK [GCS | ETCD Operator | Wait for etcd-operator to be available] ************ Monday 06 May 2019 01:56:16 +0100 (0:00:01.497) 0:18:19.301 ************ FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (50 retries left). changed: [kube1] TASK [GCS | Anthill | Register CRDs] ******************************************* Monday 06 May 2019 01:56:28 +0100 (0:00:12.355) 0:18:31.657 ************ ok: [kube1] TASK [Wait for GlusterCluster CRD to be registered] **************************** Monday 06 May 2019 01:56:29 +0100 (0:00:01.478) 0:18:33.136 ************ ok: [kube1] TASK [Wait for GlusterNode CRD to be registered] ******************************* Monday 06 May 2019 01:56:31 +0100 (0:00:01.327) 0:18:34.463 ************ ok: [kube1] TASK [GCS | Anthill | Deploy operator] ***************************************** Monday 06 May 2019 01:56:32 +0100 (0:00:01.242) 0:18:35.706 ************ ok: [kube1] TASK [GCS | ETCD Cluster | Deploy etcd-cluster] ******************************** Monday 06 May 2019 01:56:34 +0100 (0:00:01.562) 0:18:37.268 ************ ok: [kube1] TASK [GCS | ETCD Cluster | Get etcd-client service] **************************** Monday 06 May 2019 01:56:35 +0100 (0:00:01.872) 0:18:39.141 ************ changed: [kube1] TASK [GCS | ETCD Cluster | Set etcd_client_endpoint] *************************** Monday 06 May 2019 01:56:37 +0100 (0:00:01.154) 0:18:40.295 ************ ok: [kube1] TASK [GCS | ETCD Cluster | Wait for etcd-cluster to become ready] ************** Monday 06 May 2019 01:56:37 +0100 (0:00:00.378) 0:18:40.674 ************ FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (46 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (45 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (44 retries left). 
ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2 services] ********************************* Monday 06 May 2019 01:58:01 +0100 (0:01:23.699) 0:20:04.374 ************ ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2] ****************************************** Monday 06 May 2019 01:58:02 +0100 (0:00:01.629) 0:20:06.004 ************ included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Monday 06 May 2019 01:58:02 +0100 (0:00:00.206) 0:20:06.211 ************ ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube1] *************************** Monday 06 May 2019 01:58:03 +0100 (0:00:00.347) 0:20:06.558 ************ ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Monday 06 May 2019 01:58:04 +0100 (0:00:01.658) 0:20:08.216 ************ ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] *************************** Monday 06 May 2019 01:58:05 +0100 (0:00:00.352) 0:20:08.569 ************ ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Monday 06 May 2019 01:58:06 +0100 (0:00:01.565) 0:20:10.134 ************ ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] *************************** Monday 06 May 2019 01:58:07 +0100 (0:00:00.372) 0:20:10.507 ************ ok: [kube1] TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************ Monday 06 May 2019 01:58:08 +0100 (0:00:01.551) 0:20:12.059 ************ changed: [kube1] TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] ***************************** Monday 06 May 2019 01:58:09 +0100 (0:00:01.148) 0:20:13.207 ************ ok: [kube1] TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] ********** Monday 06 May 2019 01:58:10 +0100 (0:00:00.318) 0:20:13.526 ************ FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (46 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (45 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (44 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (43 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (42 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (41 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (40 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (39 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (38 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (37 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (36 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (35 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (34 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (33 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (32 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (31 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (30 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (29 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (28 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (27 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (26 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (25 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (24 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (23 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (22 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (21 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (20 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (19 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (18 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (17 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (16 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (15 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (14 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (13 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (12 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (11 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (10 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (9 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (8 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (7 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (6 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (5 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (4 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (3 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (2 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (1 retries left). fatal: [kube1]: FAILED! 
=> {"attempts": 50, "changed": false, "content": "", "msg": "Status code was -1 and not [200]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.34.237:24007/v1/peers"} to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry PLAY RECAP ********************************************************************* kube1 : ok=421 changed=119 unreachable=0 failed=1 kube2 : ok=320 changed=93 unreachable=0 failed=0 kube3 : ok=283 changed=78 unreachable=0 failed=0 Monday 06 May 2019 02:07:48 +0100 (0:09:38.584) 0:29:52.110 ************ =============================================================================== GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready -------- 578.58s GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 83.70s kubernetes/master : kubeadm | Init other uninitialized masters --------- 38.96s kubernetes/master : kubeadm | Initialize first master ------------------ 38.95s GCS Pre | Manifests | Sync GCS manifests ------------------------------- 35.55s Install packages ------------------------------------------------------- 32.85s etcd : Gen_certs | Write etcd master certs ----------------------------- 32.82s download : container_download | download images for kubeadm config images -- 32.20s Wait for host to be available ------------------------------------------ 20.90s kubernetes/master : kubeadm | write out kubeadm certs ------------------ 20.10s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 19.77s kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template -- 18.54s gather facts from all instances ---------------------------------------- 13.64s etcd : Gen_certs | Gather etcd master certs ---------------------------- 13.13s kubernetes-apps/ansible : Kubernetes Apps | Start Resources ------------ 13.11s GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 12.36s etcd : reload etcd ----------------------------------------------------- 12.02s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Create manifests -- 11.14s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Apply manifests -- 10.71s container-engine/docker : Docker | pause while Docker restarts --------- 10.41s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Tue May 7 00:15:52 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 7 May 2019 00:15:52 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #356 In-Reply-To: <267271747.3196.1557101756775.JavaMail.jenkins@jenkins.ci.centos.org> References: <267271747.3196.1557101756775.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <499850385.3264.1557188152221.JavaMail.jenkins@jenkins.ci.centos.org> See Changes: [dkhandel] Add runtime dependencies for running lcov [dkhandel] Source install runtime dependencies: - targetcli-fb - stlib-fb - [dkhandel] Install python-setuptools on duffy machines [dkhandel] Fix the bug when starting glusterd service [dkhandel] Add publisher for generated HTML reports ------------------------------------------ [...truncated 37.36 KB...] Total 64 MB/s | 143 MB 00:02 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7_6.2.x86_64 7/49 Installing : elfutils-0.172-2.el7.x86_64 8/49 Installing : pakchois-0.4-10.el7.x86_64 9/49 Installing : unzip-6.0-19.el7.x86_64 10/49 Installing : dwz-0.11-3.el7.x86_64 11/49 Installing : bzip2-1.0.6-13.el7.x86_64 12/49 Installing : usermode-1.111-5.el7.x86_64 13/49 Installing : patch-2.7.1-10.el7_5.x86_64 14/49 Installing : python-backports-1.0-8.el7.x86_64 15/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 16/49 Installing : python-urllib3-1.10.2-5.el7.noarch 17/49 Installing : python-requests-2.6.0-1.el7_1.noarch 18/49 Installing : python-babel-0.9.6-8.el7.noarch 19/49 Installing : distribution-gpg-keys-1.30-1.el7.noarch 20/49 Installing : mock-core-configs-30.2-1.el7.noarch 21/49 Installing : libmodman-2.0.1-8.el7.x86_64 22/49 Installing : libproxy-0.4.11-11.el7.x86_64 23/49 Installing : python-markupsafe-0.11-10.el7.x86_64 24/49 Installing : python-jinja2-2.7.2-2.el7.noarch 25/49 Installing : python2-distro-1.2.0-3.el7.noarch 26/49 Installing : gdb-7.6.1-114.el7.x86_64 27/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 28/49 Installing : perl-srpm-macros-1-8.el7.noarch 29/49 Installing : pigz-2.3.4-1.el7.x86_64 30/49 Installing : python2-pyroute2-0.4.13-2.el7.noarch 31/49 Installing : golang-src-1.11.5-1.el7.noarch 32/49 Installing : kernel-headers-3.10.0-957.12.1.el7.x86_64 33/49 Installing : glibc-headers-2.17-260.el7_6.4.x86_64 34/49 Installing : glibc-devel-2.17-260.el7_6.4.x86_64 35/49 Installing : gcc-4.8.5-36.el7_6.2.x86_64 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : 
mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-9.el7_6.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.15-1.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : python-jinja2-2.7.2-2.el7.noarch 2/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 3/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 4/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 5/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 6/49 Verifying : zip-3.0-11.el7.x86_64 7/49 Verifying : nettle-2.7.1-8.el7.x86_64 8/49 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 9/49 Verifying : kernel-headers-3.10.0-957.12.1.el7.x86_64 10/49 Verifying : golang-src-1.11.5-1.el7.noarch 11/49 Verifying : python2-pyroute2-0.4.13-2.el7.noarch 12/49 Verifying : pigz-2.3.4-1.el7.x86_64 13/49 Verifying : glibc-headers-2.17-260.el7_6.4.x86_64 14/49 Verifying : perl-srpm-macros-1-8.el7.noarch 15/49 Verifying : golang-1.11.5-1.el7.x86_64 16/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 17/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 18/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 19/49 Verifying : gdb-7.6.1-114.el7.x86_64 20/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 22/49 Verifying : gnutls-3.3.29-9.el7_6.x86_64 23/49 Verifying : python2-distro-1.2.0-3.el7.noarch 24/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 25/49 Verifying : libmodman-2.0.1-8.el7.x86_64 26/49 Verifying : mpfr-3.1.1-4.el7.x86_64 27/49 Verifying : distribution-gpg-keys-1.30-1.el7.noarch 28/49 Verifying : python-babel-0.9.6-8.el7.noarch 29/49 Verifying : mock-1.4.15-1.el7.noarch 30/49 Verifying : apr-util-1.5.2-6.el7.x86_64 31/49 Verifying : python-backports-1.0-8.el7.x86_64 32/49 Verifying : patch-2.7.1-10.el7_5.x86_64 33/49 Verifying : libmpc-1.0.1-3.el7.x86_64 34/49 Verifying : usermode-1.111-5.el7.x86_64 35/49 Verifying : python-six-1.9.0-2.el7.noarch 36/49 Verifying : libproxy-0.4.11-11.el7.x86_64 37/49 Verifying : neon-0.30.0-3.el7.x86_64 38/49 Verifying : glibc-devel-2.17-260.el7_6.4.x86_64 39/49 Verifying : mock-core-configs-30.2-1.el7.noarch 40/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 41/49 Verifying : bzip2-1.0.6-13.el7.x86_64 42/49 Verifying : subversion-1.7.14-14.el7.x86_64 43/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 44/49 Verifying : dwz-0.11-3.el7.x86_64 45/49 Verifying : unzip-6.0-19.el7.x86_64 46/49 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 47/49 Verifying : pakchois-0.4-10.el7.x86_64 48/49 Verifying : elfutils-0.172-2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.15-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.30-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.4 glibc-headers.x86_64 0:2.17-260.el7_6.4 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.12.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 
libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-2.el7 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-3.el7 python2-pyroute2.noarch 0:0.4.13-2.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1313 0 --:--:-- --:--:-- --:--:-- 1318 100 8513k 100 8513k 0 0 10.8M 0 --:--:-- --:--:-- --:--:-- 10.8M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2162 0 --:--:-- --:--:-- --:--:-- 2162 100 38.3M 100 38.3M 0 0 47.1M 0 --:--:-- --:--:-- --:--:-- 47.1M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 548 0 --:--:-- --:--:-- --:--:-- 550 0 0 0 620 0 0 1646 0 --:--:-- --:--:-- --:--:-- 1646 100 10.7M 100 10.7M 0 0 16.3M 0 --:--:-- --:--:-- --:--:-- 16.3M ~/nightlyrpmqoIx5s/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmqoIx5s/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmqoIx5s/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmqoIx5s ~ INFO: mock.py version 1.4.15 starting (python version = 2.7.5)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmqoIx5s/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.15 INFO: Mock Version: 1.4.15 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmqoIx5s/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 23 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 1f77d2b935534e45bcd37a593fc564ed -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.Jg9r29:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins8067576942525797032.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done 8c9ad9d8 +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 168 | n41.crusty | 172.19.2.41 | crusty | 3509 | Deployed | 8c9ad9d8 | None | None | 7 | x86_64 | 1 | 2400 | None | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Tue May 7 00:50:51 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 7 May 2019 00:50:51 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10020 - Failure! 
(release-4.1 on CentOS-6/x86_64) Message-ID: <1690139227.3266.1557190251949.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10020 - Failure: Check console output at https://ci.centos.org/job/gluster_build-rpms/10020/ to view the results. From ci at centos.org Tue May 7 00:51:23 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 7 May 2019 00:51:23 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10021 - Still Failing! (release-4.1 on CentOS-7/x86_64) In-Reply-To: <1690139227.3266.1557190251949.JavaMail.jenkins@jenkins.ci.centos.org> References: <1690139227.3266.1557190251949.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <2146498335.3268.1557190283639.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10021 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10021/ to view the results. From ci at centos.org Tue May 7 00:51:56 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 7 May 2019 00:51:56 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10022 - Still Failing! (release-5 on CentOS-6/x86_64) In-Reply-To: <2146498335.3268.1557190283639.JavaMail.jenkins@jenkins.ci.centos.org> References: <2146498335.3268.1557190283639.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1099748121.3270.1557190316795.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10022 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10022/ to view the results. From ci at centos.org Tue May 7 00:52:26 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 7 May 2019 00:52:26 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10023 - Still Failing! (release-5 on CentOS-7/x86_64) In-Reply-To: <1099748121.3270.1557190316795.JavaMail.jenkins@jenkins.ci.centos.org> References: <1099748121.3270.1557190316795.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1531641906.3272.1557190346992.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10023 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10023/ to view the results. From ci at centos.org Tue May 7 00:53:14 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 7 May 2019 00:53:14 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10024 - Still Failing! (release-6 on CentOS-7/x86_64) In-Reply-To: <1531641906.3272.1557190346992.JavaMail.jenkins@jenkins.ci.centos.org> References: <1531641906.3272.1557190346992.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <481274104.3274.1557190394300.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10024 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10024/ to view the results. From ci at centos.org Tue May 7 00:54:01 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 7 May 2019 00:54:01 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10025 - Still Failing! (release-6 on CentOS-6/x86_64) In-Reply-To: <481274104.3274.1557190394300.JavaMail.jenkins@jenkins.ci.centos.org> References: <481274104.3274.1557190394300.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1543361121.3276.1557190441270.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10025 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10025/ to view the results. 
From ci at centos.org Tue May 7 01:05:00 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 7 May 2019 01:05:00 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #185 In-Reply-To: <1521732368.3209.1557104515020.JavaMail.jenkins@jenkins.ci.centos.org> References: <1521732368.3209.1557104515020.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <90583876.3277.1557191100208.JavaMail.jenkins@jenkins.ci.centos.org> See Changes: [dkhandel] Add runtime dependencies for running lcov [dkhandel] Source install runtime dependencies: - targetcli-fb - stlib-fb - [dkhandel] Install python-setuptools on duffy machines [dkhandel] Fix the bug when starting glusterd service [dkhandel] Add publisher for generated HTML reports ------------------------------------------ [...truncated 55.20 KB...] changed: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. 
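Every one of these hits points at the same file: roles/firewall_config/meta/main.yml still contains the untouched ansible-galaxy skeleton ('your description', 'your name', 'license (GPLv2, CC-BY, etc)') and defines no platforms, so ansible-lint raises 701 once and 703 four times, and molecule aborts the run at the lint action. A sketch of a meta/main.yml that would satisfy both rules follows; the author, description, company, and license values are placeholders to be replaced with the project's real metadata, not text taken from the repository:

    galaxy_info:
      author: gluster-ansible-infra maintainers     # rule 703: replace the template default
      description: Configure firewalld for GlusterFS deployments   # placeholder wording
      company: Gluster Community                    # placeholder; the key can also be dropped
      license: GPLv3                                # assumed; use the repository's actual license
      min_ansible_version: 1.2
      platforms:                                    # rule 701: role info should contain platforms
        - name: EL
          versions:
            - 7
      galaxy_tags: []
    dependencies: []

With real metadata in place the lint action passes and the scenario can continue through converge and verify instead of skipping straight to destroy.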
--> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── create └── prepare --> Scenario: 'default' --> Action: 'create' PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Tue May 7 01:07:16 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 7 May 2019 01:07:16 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #160 In-Reply-To: <1229054864.3210.1557104869313.JavaMail.jenkins@jenkins.ci.centos.org> References: <1229054864.3210.1557104869313.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <647475315.3278.1557191236461.JavaMail.jenkins@jenkins.ci.centos.org> See Changes: [dkhandel] Add runtime dependencies for running lcov [dkhandel] Source install runtime dependencies: - targetcli-fb - stlib-fb - [dkhandel] Install python-setuptools on duffy machines [dkhandel] Fix the bug when starting glusterd service [dkhandel] Add publisher for generated HTML reports ------------------------------------------ [...truncated 459.75 KB...] changed: [kube1] => (item=gcs-etcd-operator.yml) changed: [kube1] => (item=gcs-etcd-cluster.yml) changed: [kube1] => (item=gcs-gd2-services.yml) changed: [kube1] => (item=gcs-fs-csi.yml) changed: [kube1] => (item=gcs-storage-snapshot.yml) changed: [kube1] => (item=gcs-virtblock-csi.yml) changed: [kube1] => (item=gcs-storage-virtblock.yml) changed: [kube1] => (item=gcs-prometheus-operator.yml) changed: [kube1] => (item=gcs-prometheus-bundle.yml) changed: [kube1] => (item=gcs-prometheus-alertmanager-cluster.yml) changed: [kube1] => (item=gcs-prometheus-operator-metrics.yml) changed: [kube1] => (item=gcs-prometheus-kube-state-metrics.yml) changed: [kube1] => (item=gcs-prometheus-node-exporter.yml) changed: [kube1] => (item=gcs-prometheus-kube-metrics.yml) changed: [kube1] => (item=gcs-prometheus-etcd.yml) changed: [kube1] => (item=gcs-grafana.yml) changed: [kube1] => (item=gcs-operator-crd.yml) changed: [kube1] => (item=gcs-operator.yml) changed: [kube1] => (item=gcs-mixins.yml) TASK [GCS Pre | Manifests | Create GD2 manifests] ****************************** Tuesday 07 May 2019 01:55:32 +0100 (0:00:35.606) 0:18:05.646 *********** included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Set fact kube_hostname] *** Tuesday 07 May 2019 01:55:32 +0100 (0:00:00.394) 0:18:06.040 *********** ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Create gcs-gd2-kube1.yml] *** Tuesday 07 May 2019 01:55:33 +0100 (0:00:00.405) 0:18:06.446 *********** changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Set fact kube_hostname] *** Tuesday 07 May 2019 01:55:35 +0100 (0:00:01.990) 0:18:08.436 *********** ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Create gcs-gd2-kube2.yml] *** Tuesday 07 May 2019 01:55:35 +0100 (0:00:00.431) 0:18:08.868 *********** changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Set fact kube_hostname] *** Tuesday 07 May 2019 01:55:37 +0100 (0:00:02.133) 0:18:11.002 *********** ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Create gcs-gd2-kube3.yml] *** Tuesday 07 May 2019 01:55:38 +0100 (0:00:00.427) 0:18:11.430 *********** changed: 
[kube1] TASK [GCS | Namespace | Create GCS namespace] ********************************** Tuesday 07 May 2019 01:55:40 +0100 (0:00:02.061) 0:18:13.491 *********** ok: [kube1] TASK [GCS | ETCD Operator | Deploy etcd-operator] ****************************** Tuesday 07 May 2019 01:55:41 +0100 (0:00:01.548) 0:18:15.040 *********** ok: [kube1] TASK [GCS | ETCD Operator | Wait for etcd-operator to be available] ************ Tuesday 07 May 2019 01:55:43 +0100 (0:00:01.664) 0:18:16.704 *********** FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (50 retries left). changed: [kube1] TASK [GCS | Anthill | Register CRDs] ******************************************* Tuesday 07 May 2019 01:55:55 +0100 (0:00:12.123) 0:18:28.828 *********** ok: [kube1] TASK [Wait for GlusterCluster CRD to be registered] **************************** Tuesday 07 May 2019 01:55:57 +0100 (0:00:01.504) 0:18:30.333 *********** ok: [kube1] TASK [Wait for GlusterNode CRD to be registered] ******************************* Tuesday 07 May 2019 01:55:58 +0100 (0:00:01.280) 0:18:31.613 *********** ok: [kube1] TASK [GCS | Anthill | Deploy operator] ***************************************** Tuesday 07 May 2019 01:55:59 +0100 (0:00:01.282) 0:18:32.896 *********** ok: [kube1] TASK [GCS | ETCD Cluster | Deploy etcd-cluster] ******************************** Tuesday 07 May 2019 01:56:01 +0100 (0:00:01.711) 0:18:34.608 *********** ok: [kube1] TASK [GCS | ETCD Cluster | Get etcd-client service] **************************** Tuesday 07 May 2019 01:56:03 +0100 (0:00:01.828) 0:18:36.436 *********** changed: [kube1] TASK [GCS | ETCD Cluster | Set etcd_client_endpoint] *************************** Tuesday 07 May 2019 01:56:04 +0100 (0:00:01.252) 0:18:37.689 *********** ok: [kube1] TASK [GCS | ETCD Cluster | Wait for etcd-cluster to become ready] ************** Tuesday 07 May 2019 01:56:04 +0100 (0:00:00.358) 0:18:38.047 *********** FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (46 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (45 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (44 retries left). 
ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2 services] ********************************* Tuesday 07 May 2019 01:57:28 +0100 (0:01:23.842) 0:20:01.890 *********** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2] ****************************************** Tuesday 07 May 2019 01:57:30 +0100 (0:00:01.618) 0:20:03.509 *********** included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Tuesday 07 May 2019 01:57:30 +0100 (0:00:00.200) 0:20:03.710 *********** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube1] *************************** Tuesday 07 May 2019 01:57:30 +0100 (0:00:00.316) 0:20:04.026 *********** ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Tuesday 07 May 2019 01:57:32 +0100 (0:00:01.527) 0:20:05.554 *********** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] *************************** Tuesday 07 May 2019 01:57:32 +0100 (0:00:00.335) 0:20:05.890 *********** ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Tuesday 07 May 2019 01:57:34 +0100 (0:00:01.940) 0:20:07.831 *********** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] *************************** Tuesday 07 May 2019 01:57:34 +0100 (0:00:00.326) 0:20:08.158 *********** ok: [kube1] TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************ Tuesday 07 May 2019 01:57:36 +0100 (0:00:01.575) 0:20:09.733 *********** changed: [kube1] TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] ***************************** Tuesday 07 May 2019 01:57:37 +0100 (0:00:01.374) 0:20:11.107 *********** ok: [kube1] TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] ********** Tuesday 07 May 2019 01:57:38 +0100 (0:00:00.370) 0:20:11.477 *********** FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (46 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (45 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (44 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (43 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (42 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (41 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (40 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (39 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (38 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (37 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (36 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (35 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (34 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (33 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (32 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (31 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (30 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (29 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (28 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (27 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (26 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (25 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (24 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (23 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (22 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (21 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (20 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (19 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (18 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (17 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (16 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (15 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (14 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (13 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (12 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (11 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (10 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (9 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (8 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (7 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (6 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (5 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (4 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (3 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (2 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (1 retries left). fatal: [kube1]: FAILED! 
=> {"attempts": 50, "changed": false, "content": "", "msg": "Status code was -1 and not [200]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.26.132:24007/v1/peers"} to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry PLAY RECAP ********************************************************************* kube1 : ok=421 changed=119 unreachable=0 failed=1 kube2 : ok=320 changed=93 unreachable=0 failed=0 kube3 : ok=283 changed=78 unreachable=0 failed=0 Tuesday 07 May 2019 02:07:16 +0100 (0:09:37.772) 0:29:49.251 *********** =============================================================================== GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready -------- 577.77s GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 83.84s kubernetes/master : kubeadm | Initialize first master ------------------ 39.09s kubernetes/master : kubeadm | Init other uninitialized masters --------- 38.92s GCS Pre | Manifests | Sync GCS manifests ------------------------------- 35.61s download : container_download | download images for kubeadm config images -- 33.19s etcd : Gen_certs | Write etcd master certs ----------------------------- 33.01s Install packages ------------------------------------------------------- 31.79s kubernetes/master : kubeadm | write out kubeadm certs ------------------ 20.94s Wait for host to be available ------------------------------------------ 20.93s kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template -- 19.07s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 15.84s gather facts from all instances ---------------------------------------- 14.71s kubernetes-apps/ansible : Kubernetes Apps | Start Resources ------------ 13.61s etcd : Gen_certs | Gather etcd master certs ---------------------------- 13.03s GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 12.12s etcd : reload etcd ----------------------------------------------------- 12.09s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Apply manifests -- 11.54s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Create manifests -- 11.12s container-engine/docker : Docker | pause while Docker restarts --------- 10.42s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Wed May 8 00:15:57 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 8 May 2019 00:15:57 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #357 In-Reply-To: <499850385.3264.1557188152221.JavaMail.jenkins@jenkins.ci.centos.org> References: <499850385.3264.1557188152221.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <982973163.3371.1557274557085.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.40 KB...] Total 68 MB/s | 143 MB 00:02 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7_6.2.x86_64 7/49 Installing : elfutils-0.172-2.el7.x86_64 8/49 Installing : pakchois-0.4-10.el7.x86_64 9/49 Installing : unzip-6.0-19.el7.x86_64 10/49 Installing : dwz-0.11-3.el7.x86_64 11/49 Installing : bzip2-1.0.6-13.el7.x86_64 12/49 Installing : usermode-1.111-5.el7.x86_64 13/49 Installing : patch-2.7.1-10.el7_5.x86_64 14/49 Installing : python-backports-1.0-8.el7.x86_64 15/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 16/49 Installing : python-urllib3-1.10.2-5.el7.noarch 17/49 Installing : python-requests-2.6.0-1.el7_1.noarch 18/49 Installing : python-babel-0.9.6-8.el7.noarch 19/49 Installing : distribution-gpg-keys-1.30-1.el7.noarch 20/49 Installing : mock-core-configs-30.2-1.el7.noarch 21/49 Installing : libmodman-2.0.1-8.el7.x86_64 22/49 Installing : libproxy-0.4.11-11.el7.x86_64 23/49 Installing : python-markupsafe-0.11-10.el7.x86_64 24/49 Installing : python-jinja2-2.7.2-2.el7.noarch 25/49 Installing : python2-distro-1.2.0-3.el7.noarch 26/49 Installing : gdb-7.6.1-114.el7.x86_64 27/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 28/49 Installing : perl-srpm-macros-1-8.el7.noarch 29/49 Installing : pigz-2.3.4-1.el7.x86_64 30/49 Installing : python2-pyroute2-0.4.13-2.el7.noarch 31/49 Installing : golang-src-1.11.5-1.el7.noarch 32/49 Installing : kernel-headers-3.10.0-957.12.1.el7.x86_64 33/49 Installing : glibc-headers-2.17-260.el7_6.4.x86_64 34/49 Installing : glibc-devel-2.17-260.el7_6.4.x86_64 35/49 Installing : gcc-4.8.5-36.el7_6.2.x86_64 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-9.el7_6.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 
46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.15-1.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : python-jinja2-2.7.2-2.el7.noarch 2/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 3/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 4/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 5/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 6/49 Verifying : zip-3.0-11.el7.x86_64 7/49 Verifying : nettle-2.7.1-8.el7.x86_64 8/49 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 9/49 Verifying : kernel-headers-3.10.0-957.12.1.el7.x86_64 10/49 Verifying : golang-src-1.11.5-1.el7.noarch 11/49 Verifying : python2-pyroute2-0.4.13-2.el7.noarch 12/49 Verifying : pigz-2.3.4-1.el7.x86_64 13/49 Verifying : glibc-headers-2.17-260.el7_6.4.x86_64 14/49 Verifying : perl-srpm-macros-1-8.el7.noarch 15/49 Verifying : golang-1.11.5-1.el7.x86_64 16/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 17/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 18/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 19/49 Verifying : gdb-7.6.1-114.el7.x86_64 20/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 22/49 Verifying : gnutls-3.3.29-9.el7_6.x86_64 23/49 Verifying : python2-distro-1.2.0-3.el7.noarch 24/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 25/49 Verifying : libmodman-2.0.1-8.el7.x86_64 26/49 Verifying : mpfr-3.1.1-4.el7.x86_64 27/49 Verifying : distribution-gpg-keys-1.30-1.el7.noarch 28/49 Verifying : python-babel-0.9.6-8.el7.noarch 29/49 Verifying : mock-1.4.15-1.el7.noarch 30/49 Verifying : apr-util-1.5.2-6.el7.x86_64 31/49 Verifying : python-backports-1.0-8.el7.x86_64 32/49 Verifying : patch-2.7.1-10.el7_5.x86_64 33/49 Verifying : libmpc-1.0.1-3.el7.x86_64 34/49 Verifying : usermode-1.111-5.el7.x86_64 35/49 Verifying : python-six-1.9.0-2.el7.noarch 36/49 Verifying : libproxy-0.4.11-11.el7.x86_64 37/49 Verifying : neon-0.30.0-3.el7.x86_64 38/49 Verifying : glibc-devel-2.17-260.el7_6.4.x86_64 39/49 Verifying : mock-core-configs-30.2-1.el7.noarch 40/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 41/49 Verifying : bzip2-1.0.6-13.el7.x86_64 42/49 Verifying : subversion-1.7.14-14.el7.x86_64 43/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 44/49 Verifying : dwz-0.11-3.el7.x86_64 45/49 Verifying : unzip-6.0-19.el7.x86_64 46/49 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 47/49 Verifying : pakchois-0.4-10.el7.x86_64 48/49 Verifying : elfutils-0.172-2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.15-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.30-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.4 glibc-headers.x86_64 0:2.17-260.el7_6.4 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.12.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 
pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-2.el7 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-3.el7 python2-pyroute2.noarch 0:0.4.13-2.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1723 0 --:--:-- --:--:-- --:--:-- 1723 100 8513k 100 8513k 0 0 12.3M 0 --:--:-- --:--:-- --:--:-- 12.3M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 1918 0 --:--:-- --:--:-- --:--:-- 1917 100 38.3M 100 38.3M 0 0 43.2M 0 --:--:-- --:--:-- --:--:-- 43.2M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 569 0 --:--:-- --:--:-- --:--:-- 570 0 0 0 620 0 0 1586 0 --:--:-- --:--:-- --:--:-- 1586 100 10.7M 100 10.7M 0 0 15.4M 0 --:--:-- --:--:-- --:--:-- 15.4M ~/nightlyrpmfEMEnu/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmfEMEnu/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmfEMEnu/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmfEMEnu ~ INFO: mock.py version 1.4.15 starting (python version = 2.7.5)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmfEMEnu/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.15 INFO: Mock Version: 1.4.15 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmfEMEnu/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 23 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 2c821e1d080c4860aa3fd7c21a59edc4 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.8eWmXX:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins7141378129182351703.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done 1883a20b +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 180 | n53.crusty | 172.19.2.53 | crusty | 3519 | Deployed | 1883a20b | None | None | 7 | x86_64 | 1 | 2520 | None | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Wed May 8 00:50:49 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 8 May 2019 00:50:49 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10028 - Failure! 
(release-4.1 on CentOS-6/x86_64) Message-ID: <2105806270.3376.1557276650476.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10028 - Failure: Check console output at https://ci.centos.org/job/gluster_build-rpms/10028/ to view the results. From ci at centos.org Wed May 8 00:51:39 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 8 May 2019 00:51:39 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10029 - Still Failing! (release-4.1 on CentOS-7/x86_64) In-Reply-To: <2105806270.3376.1557276650476.JavaMail.jenkins@jenkins.ci.centos.org> References: <2105806270.3376.1557276650476.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <836236100.3378.1557276700153.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10029 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10029/ to view the results. From ci at centos.org Wed May 8 00:52:27 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 8 May 2019 00:52:27 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10030 - Still Failing! (release-5 on CentOS-6/x86_64) In-Reply-To: <836236100.3378.1557276700153.JavaMail.jenkins@jenkins.ci.centos.org> References: <836236100.3378.1557276700153.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1713954994.3380.1557276747229.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10030 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10030/ to view the results. From ci at centos.org Wed May 8 00:53:14 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 8 May 2019 00:53:14 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10031 - Still Failing! (release-5 on CentOS-7/x86_64) In-Reply-To: <1713954994.3380.1557276747229.JavaMail.jenkins@jenkins.ci.centos.org> References: <1713954994.3380.1557276747229.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1601368684.3382.1557276794902.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10031 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10031/ to view the results. From ci at centos.org Wed May 8 00:54:03 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 8 May 2019 00:54:03 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10032 - Still Failing! (release-6 on CentOS-7/x86_64) In-Reply-To: <1601368684.3382.1557276794902.JavaMail.jenkins@jenkins.ci.centos.org> References: <1601368684.3382.1557276794902.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <263474323.3384.1557276843793.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10032 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10032/ to view the results. From ci at centos.org Wed May 8 00:54:52 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 8 May 2019 00:54:52 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10033 - Still Failing! (release-6 on CentOS-6/x86_64) In-Reply-To: <263474323.3384.1557276843793.JavaMail.jenkins@jenkins.ci.centos.org> References: <263474323.3384.1557276843793.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1676695882.3386.1557276892381.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10033 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10033/ to view the results. 
From ci at centos.org Wed May 8 00:55:27 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 8 May 2019 00:55:27 +0000 (UTC) Subject: [CI-results] Jenkins build is back to normal : gluster_anteater_gcs #161 In-Reply-To: <647475315.3278.1557191236461.JavaMail.jenkins@jenkins.ci.centos.org> References: <647475315.3278.1557191236461.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1031820273.3387.1557276927045.JavaMail.jenkins@jenkins.ci.centos.org> See From ci at centos.org Wed May 8 01:05:54 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 8 May 2019 01:05:54 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #186 In-Reply-To: <90583876.3277.1557191100208.JavaMail.jenkins@jenkins.ci.centos.org> References: <90583876.3277.1557191100208.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <287627876.3388.1557277554579.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 55.21 KB...] changed: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. 
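Rules 701 and 703 above all point at the same cause: roles/firewall_config/meta/main.yml still carries the unedited ansible-galaxy boilerplate. A sketch of a fix follows; the author, description, company, and license values are illustrative placeholders, not taken from the repository, while min_ansible_version is kept from the log and the platforms block is what rule 701 asks for:

# Illustrative sketch: populate the default galaxy_info fields (rule 703)
# and add a platforms list (rule 701). All field values are assumptions.
cat > roles/firewall_config/meta/main.yml <<'EOF'
galaxy_info:
  author: Gluster infra maintainers
  description: Configure firewalld for GlusterFS storage hosts
  company: Gluster Community
  license: GPLv3
  min_ansible_version: 1.2
  platforms:
    - name: EL
      versions:
        - 7
  galaxy_tags: []
dependencies: []
EOF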
--> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── create └── prepare --> Scenario: 'default' --> Action: 'create' PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Thu May 9 00:15:48 2019 From: ci at centos.org (ci at centos.org) Date: Thu, 9 May 2019 00:15:48 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #358 In-Reply-To: <982973163.3371.1557274557085.JavaMail.jenkins@jenkins.ci.centos.org> References: <982973163.3371.1557274557085.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1362125575.3521.1557360948547.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.38 KB...] Total 68 MB/s | 143 MB 00:02 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7_6.2.x86_64 7/49 Installing : elfutils-0.172-2.el7.x86_64 8/49 Installing : pakchois-0.4-10.el7.x86_64 9/49 Installing : unzip-6.0-19.el7.x86_64 10/49 Installing : dwz-0.11-3.el7.x86_64 11/49 Installing : bzip2-1.0.6-13.el7.x86_64 12/49 Installing : usermode-1.111-5.el7.x86_64 13/49 Installing : patch-2.7.1-10.el7_5.x86_64 14/49 Installing : python-backports-1.0-8.el7.x86_64 15/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 16/49 Installing : python-urllib3-1.10.2-5.el7.noarch 17/49 Installing : python-requests-2.6.0-1.el7_1.noarch 18/49 Installing : python-babel-0.9.6-8.el7.noarch 19/49 Installing : distribution-gpg-keys-1.30-1.el7.noarch 20/49 Installing : mock-core-configs-30.2-1.el7.noarch 21/49 Installing : libmodman-2.0.1-8.el7.x86_64 22/49 Installing : libproxy-0.4.11-11.el7.x86_64 23/49 Installing : python-markupsafe-0.11-10.el7.x86_64 24/49 Installing : python-jinja2-2.7.2-2.el7.noarch 25/49 Installing : python2-distro-1.2.0-3.el7.noarch 26/49 Installing : gdb-7.6.1-114.el7.x86_64 27/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 28/49 Installing : perl-srpm-macros-1-8.el7.noarch 29/49 Installing : pigz-2.3.4-1.el7.x86_64 30/49 Installing : python2-pyroute2-0.4.13-2.el7.noarch 31/49 Installing : golang-src-1.11.5-1.el7.noarch 32/49 Installing : kernel-headers-3.10.0-957.12.1.el7.x86_64 33/49 Installing : glibc-headers-2.17-260.el7_6.4.x86_64 34/49 Installing : glibc-devel-2.17-260.el7_6.4.x86_64 35/49 Installing : gcc-4.8.5-36.el7_6.2.x86_64 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-9.el7_6.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : 
golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.15-1.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : python-jinja2-2.7.2-2.el7.noarch 2/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 3/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 4/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 5/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 6/49 Verifying : zip-3.0-11.el7.x86_64 7/49 Verifying : nettle-2.7.1-8.el7.x86_64 8/49 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 9/49 Verifying : kernel-headers-3.10.0-957.12.1.el7.x86_64 10/49 Verifying : golang-src-1.11.5-1.el7.noarch 11/49 Verifying : python2-pyroute2-0.4.13-2.el7.noarch 12/49 Verifying : pigz-2.3.4-1.el7.x86_64 13/49 Verifying : glibc-headers-2.17-260.el7_6.4.x86_64 14/49 Verifying : perl-srpm-macros-1-8.el7.noarch 15/49 Verifying : golang-1.11.5-1.el7.x86_64 16/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 17/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 18/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 19/49 Verifying : gdb-7.6.1-114.el7.x86_64 20/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 22/49 Verifying : gnutls-3.3.29-9.el7_6.x86_64 23/49 Verifying : python2-distro-1.2.0-3.el7.noarch 24/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 25/49 Verifying : libmodman-2.0.1-8.el7.x86_64 26/49 Verifying : mpfr-3.1.1-4.el7.x86_64 27/49 Verifying : distribution-gpg-keys-1.30-1.el7.noarch 28/49 Verifying : python-babel-0.9.6-8.el7.noarch 29/49 Verifying : mock-1.4.15-1.el7.noarch 30/49 Verifying : apr-util-1.5.2-6.el7.x86_64 31/49 Verifying : python-backports-1.0-8.el7.x86_64 32/49 Verifying : patch-2.7.1-10.el7_5.x86_64 33/49 Verifying : libmpc-1.0.1-3.el7.x86_64 34/49 Verifying : usermode-1.111-5.el7.x86_64 35/49 Verifying : python-six-1.9.0-2.el7.noarch 36/49 Verifying : libproxy-0.4.11-11.el7.x86_64 37/49 Verifying : neon-0.30.0-3.el7.x86_64 38/49 Verifying : glibc-devel-2.17-260.el7_6.4.x86_64 39/49 Verifying : mock-core-configs-30.2-1.el7.noarch 40/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 41/49 Verifying : bzip2-1.0.6-13.el7.x86_64 42/49 Verifying : subversion-1.7.14-14.el7.x86_64 43/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 44/49 Verifying : dwz-0.11-3.el7.x86_64 45/49 Verifying : unzip-6.0-19.el7.x86_64 46/49 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 47/49 Verifying : pakchois-0.4-10.el7.x86_64 48/49 Verifying : elfutils-0.172-2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.15-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.30-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.4 glibc-headers.x86_64 0:2.17-260.el7_6.4 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.12.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 
perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-2.el7 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-3.el7 python2-pyroute2.noarch 0:0.4.13-2.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1892 0 --:--:-- --:--:-- --:--:-- 1896 100 8513k 100 8513k 0 0 15.8M 0 --:--:-- --:--:-- --:--:-- 15.8M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2145 0 --:--:-- --:--:-- --:--:-- 2147 83 38.3M 83 32.1M 0 0 42.4M 0 --:--:-- --:--:-- --:--:-- 42.4M100 38.3M 100 38.3M 0 0 47.1M 0 --:--:-- --:--:-- --:--:-- 109M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 548 0 --:--:-- --:--:-- --:--:-- 550 0 0 0 620 0 0 1495 0 --:--:-- --:--:-- --:--:-- 1495 0 10.7M 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 10.7M 100 10.7M 0 0 15.5M 0 --:--:-- --:--:-- --:--:-- 80.0M ~/nightlyrpmsixBvx/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmsixBvx/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmsixBvx/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmsixBvx ~ INFO: mock.py version 1.4.15 starting (python version = 2.7.5)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmsixBvx/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.15 INFO: Mock Version: 1.4.15 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmsixBvx/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 23 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 30bcb9869d1242a49d7b234b5acfc6b2 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.6Wkwu1:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins2844068177974451002.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done d28ca093
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname   | ip_address  | chassis | used_count | current_state | comment  | distro | rel  | centos_version | architecture | node_pool | console_port | flavor |
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 160     | n33.crusty | 172.19.2.33 | crusty  | 3528       | Deployed      | d28ca093 | None   | None | 7              | x86_64       | 1         | 2320         | None   |
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Thu May 9 00:50:47 2019 From: ci at centos.org (ci at centos.org) Date: Thu, 9 May 2019 00:50:47 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10036 - Failure!
(release-4.1 on CentOS-6/x86_64) Message-ID: <1333813237.3527.1557363048143.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10036 - Failure: Check console output at https://ci.centos.org/job/gluster_build-rpms/10036/ to view the results. From ci at centos.org Thu May 9 00:51:38 2019 From: ci at centos.org (ci at centos.org) Date: Thu, 9 May 2019 00:51:38 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10037 - Still Failing! (release-4.1 on CentOS-7/x86_64) In-Reply-To: <1333813237.3527.1557363048143.JavaMail.jenkins@jenkins.ci.centos.org> References: <1333813237.3527.1557363048143.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <790793391.3529.1557363099013.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10037 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10037/ to view the results. From ci at centos.org Thu May 9 00:52:10 2019 From: ci at centos.org (ci at centos.org) Date: Thu, 9 May 2019 00:52:10 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10038 - Still Failing! (release-5 on CentOS-6/x86_64) In-Reply-To: <790793391.3529.1557363099013.JavaMail.jenkins@jenkins.ci.centos.org> References: <790793391.3529.1557363099013.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <769747041.3531.1557363130258.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10038 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10038/ to view the results. From ci at centos.org Thu May 9 00:52:41 2019 From: ci at centos.org (ci at centos.org) Date: Thu, 9 May 2019 00:52:41 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10039 - Still Failing! (release-5 on CentOS-7/x86_64) In-Reply-To: <769747041.3531.1557363130258.JavaMail.jenkins@jenkins.ci.centos.org> References: <769747041.3531.1557363130258.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <175965464.3533.1557363161262.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10039 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10039/ to view the results. From ci at centos.org Thu May 9 00:53:12 2019 From: ci at centos.org (ci at centos.org) Date: Thu, 9 May 2019 00:53:12 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10040 - Still Failing! (release-6 on CentOS-7/x86_64) In-Reply-To: <175965464.3533.1557363161262.JavaMail.jenkins@jenkins.ci.centos.org> References: <175965464.3533.1557363161262.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1010469013.3535.1557363192897.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10040 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10040/ to view the results. From ci at centos.org Thu May 9 00:53:43 2019 From: ci at centos.org (ci at centos.org) Date: Thu, 9 May 2019 00:53:43 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10041 - Still Failing! (release-6 on CentOS-6/x86_64) In-Reply-To: <1010469013.3535.1557363192897.JavaMail.jenkins@jenkins.ci.centos.org> References: <1010469013.3535.1557363192897.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <2134723369.3537.1557363223550.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10041 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10041/ to view the results. 
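Both gluster_gd2-nightly-rpms failures above die at the same step: rpmbuild -bb run by mock under systemd-nspawn with the epel-7-x86_64 config, and the mail truncates the actual rpmbuild error. A local reproduction sketch, using the config name and SRPM path from the log (the CI may pass additional mock options that are not shown here):

# Approximate local rerun of the failing step; plain mock writes
# build.log and root.log under /var/lib/mock/epel-7-x86_64/result/
# by default, which is where the real rpmbuild error should appear.
yum -y install mock rpm-build
mock -r epel-7-x86_64 --rebuild \
    /root/nightlyrpmsixBvx/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm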
From ci at centos.org Thu May 9 00:55:46 2019 From: ci at centos.org (ci at centos.org) Date: Thu, 9 May 2019 00:55:46 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #162 Message-ID: <428155600.3538.1557363346393.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 459.66 KB...] changed: [kube1] => (item=gcs-gd2-services.yml) changed: [kube1] => (item=gcs-fs-csi.yml) changed: [kube1] => (item=gcs-storage-snapshot.yml) changed: [kube1] => (item=gcs-virtblock-csi.yml) changed: [kube1] => (item=gcs-storage-virtblock.yml) changed: [kube1] => (item=gcs-prometheus-operator.yml) changed: [kube1] => (item=gcs-prometheus-bundle.yml) changed: [kube1] => (item=gcs-prometheus-alertmanager-cluster.yml) changed: [kube1] => (item=gcs-prometheus-operator-metrics.yml) changed: [kube1] => (item=gcs-prometheus-kube-state-metrics.yml) changed: [kube1] => (item=gcs-prometheus-node-exporter.yml) changed: [kube1] => (item=gcs-prometheus-kube-metrics.yml) changed: [kube1] => (item=gcs-prometheus-etcd.yml) changed: [kube1] => (item=gcs-grafana.yml) changed: [kube1] => (item=gcs-operator-crd.yml) changed: [kube1] => (item=gcs-operator.yml) changed: [kube1] => (item=gcs-mixins.yml) TASK [GCS Pre | Manifests | Create GD2 manifests] ****************************** Thursday 09 May 2019 01:44:57 +0100 (0:00:11.858) 0:10:17.718 ********** included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Set fact kube_hostname] *** Thursday 09 May 2019 01:44:57 +0100 (0:00:00.090) 0:10:17.809 ********** ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Create gcs-gd2-kube1.yml] *** Thursday 09 May 2019 01:44:58 +0100 (0:00:00.135) 0:10:17.944 ********** changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Set fact kube_hostname] *** Thursday 09 May 2019 01:44:58 +0100 (0:00:00.717) 0:10:18.662 ********** ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Create gcs-gd2-kube2.yml] *** Thursday 09 May 2019 01:44:58 +0100 (0:00:00.137) 0:10:18.800 ********** changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Set fact kube_hostname] *** Thursday 09 May 2019 01:44:59 +0100 (0:00:00.715) 0:10:19.516 ********** ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Create gcs-gd2-kube3.yml] *** Thursday 09 May 2019 01:44:59 +0100 (0:00:00.133) 0:10:19.649 ********** changed: [kube1] TASK [GCS | Namespace | Create GCS namespace] ********************************** Thursday 09 May 2019 01:45:00 +0100 (0:00:00.711) 0:10:20.360 ********** ok: [kube1] TASK [GCS | ETCD Operator | Deploy etcd-operator] ****************************** Thursday 09 May 2019 01:45:01 +0100 (0:00:00.670) 0:10:21.031 ********** ok: [kube1] TASK [GCS | ETCD Operator | Wait for etcd-operator to be available] ************ Thursday 09 May 2019 01:45:01 +0100 (0:00:00.713) 0:10:21.744 ********** FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (50 retries left). 
changed: [kube1] TASK [GCS | Anthill | Register CRDs] ******************************************* Thursday 09 May 2019 01:45:12 +0100 (0:00:10.877) 0:10:32.622 ********** ok: [kube1] TASK [Wait for GlusterCluster CRD to be registered] **************************** Thursday 09 May 2019 01:45:13 +0100 (0:00:00.631) 0:10:33.253 ********** ok: [kube1] TASK [Wait for GlusterNode CRD to be registered] ******************************* Thursday 09 May 2019 01:45:13 +0100 (0:00:00.469) 0:10:33.723 ********** ok: [kube1] TASK [GCS | Anthill | Deploy operator] ***************************************** Thursday 09 May 2019 01:45:14 +0100 (0:00:00.453) 0:10:34.176 ********** ok: [kube1] TASK [GCS | ETCD Cluster | Deploy etcd-cluster] ******************************** Thursday 09 May 2019 01:45:15 +0100 (0:00:00.697) 0:10:34.873 ********** ok: [kube1] TASK [GCS | ETCD Cluster | Get etcd-client service] **************************** Thursday 09 May 2019 01:45:15 +0100 (0:00:00.876) 0:10:35.750 ********** FAILED - RETRYING: GCS | ETCD Cluster | Get etcd-client service (5 retries left). changed: [kube1] TASK [GCS | ETCD Cluster | Set etcd_client_endpoint] *************************** Thursday 09 May 2019 01:45:21 +0100 (0:00:05.858) 0:10:41.609 ********** ok: [kube1] TASK [GCS | ETCD Cluster | Wait for etcd-cluster to become ready] ************** Thursday 09 May 2019 01:45:21 +0100 (0:00:00.146) 0:10:41.755 ********** FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (46 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (45 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (44 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (43 retries left). 
ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2 services] ********************************* Thursday 09 May 2019 01:46:49 +0100 (0:01:27.275) 0:12:09.030 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2] ****************************************** Thursday 09 May 2019 01:46:49 +0100 (0:00:00.763) 0:12:09.794 ********** included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Thursday 09 May 2019 01:46:50 +0100 (0:00:00.100) 0:12:09.895 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube1] *************************** Thursday 09 May 2019 01:46:50 +0100 (0:00:00.137) 0:12:10.032 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Thursday 09 May 2019 01:46:51 +0100 (0:00:01.118) 0:12:11.150 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] *************************** Thursday 09 May 2019 01:46:51 +0100 (0:00:00.150) 0:12:11.300 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Thursday 09 May 2019 01:46:52 +0100 (0:00:00.782) 0:12:12.083 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] *************************** Thursday 09 May 2019 01:46:52 +0100 (0:00:00.166) 0:12:12.250 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************ Thursday 09 May 2019 01:46:53 +0100 (0:00:00.740) 0:12:12.990 ********** changed: [kube1] TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] ***************************** Thursday 09 May 2019 01:46:53 +0100 (0:00:00.546) 0:12:13.536 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] ********** Thursday 09 May 2019 01:46:53 +0100 (0:00:00.219) 0:12:13.756 ********** FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (46 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (45 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (44 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (43 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (42 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (41 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (40 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (39 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (38 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (37 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (36 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (35 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (34 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (33 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (32 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (31 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (30 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (29 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (28 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (27 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (26 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (25 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (24 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (23 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (22 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (21 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (20 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (19 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (18 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (17 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (16 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (15 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (14 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (13 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (12 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (11 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (10 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (9 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (8 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (7 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (6 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (5 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (4 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (3 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (2 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (1 retries left). fatal: [kube1]: FAILED! 
=> {"attempts": 50, "changed": false, "content": "", "msg": "Status code was -1 and not [200]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.32.64:24007/v1/peers"} to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry PLAY RECAP ********************************************************************* kube1 : ok=421 changed=119 unreachable=0 failed=1 kube2 : ok=320 changed=93 unreachable=0 failed=0 kube3 : ok=283 changed=78 unreachable=0 failed=0 Thursday 09 May 2019 01:55:46 +0100 (0:08:52.264) 0:21:06.020 ********** =============================================================================== GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready -------- 532.26s GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 87.28s download : container_download | download images for kubeadm config images -- 38.48s kubernetes/master : kubeadm | Initialize first master ------------------ 27.54s kubernetes/master : kubeadm | Init other uninitialized masters --------- 25.30s Install packages ------------------------------------------------------- 24.49s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 17.68s Wait for host to be available ------------------------------------------ 16.52s etcd : Gen_certs | Write etcd master certs ----------------------------- 12.42s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 12.14s GCS Pre | Manifests | Sync GCS manifests ------------------------------- 11.86s Extend root VG --------------------------------------------------------- 11.76s etcd : reload etcd ----------------------------------------------------- 10.95s GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 10.88s kubernetes/node : install | Copy hyperkube binary from download dir ---- 10.32s container-engine/docker : Docker | pause while Docker restarts --------- 10.27s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 10.13s download : file_download | Download item -------------------------------- 9.03s gather facts from all instances ----------------------------------------- 8.43s etcd : wait for etcd up ------------------------------------------------- 7.74s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Thu May 9 01:01:44 2019 From: ci at centos.org (ci at centos.org) Date: Thu, 9 May 2019 01:01:44 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #187 In-Reply-To: <287627876.3388.1557277554579.JavaMail.jenkins@jenkins.ci.centos.org> References: <287627876.3388.1557277554579.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1124495381.3540.1557363704097.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 55.21 KB...] changed: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. 
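Besides the lint findings, both gluster_ansible-infra runs hit a plain shell error, visible above in #186 and again just below: run-centos-ci.sh line 29 cannot cd into gluster-ansible-infra/roles/backend_setup/, after which the job silently re-tests firewall_config instead of the missing role. A guard sketch; the loop shape and the molecule invocation are assumptions, since the script itself is not in the truncated log:

# Hypothetical guard for the failing cd at run-centos-ci.sh line 29;
# fail the job early instead of re-running the previous role's scenario.
for role in firewall_config backend_setup; do
    dir="gluster-ansible-infra/roles/$role"
    if [ ! -d "$dir" ]; then
        echo "role directory $dir is missing; failing early" >&2
        exit 1
    fi
    ( cd "$dir" && molecule test ) || exit 1
done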
--> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── create └── prepare --> Scenario: 'default' --> Action: 'create' PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Fri May 10 00:15:55 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 10 May 2019 00:15:55 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #359 In-Reply-To: <1362125575.3521.1557360948547.JavaMail.jenkins@jenkins.ci.centos.org> References: <1362125575.3521.1557360948547.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <651151862.3678.1557447355596.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.42 KB...] Total 61 MB/s | 143 MB 00:02 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7_6.2.x86_64 7/49 Installing : elfutils-0.172-2.el7.x86_64 8/49 Installing : pakchois-0.4-10.el7.x86_64 9/49 Installing : unzip-6.0-19.el7.x86_64 10/49 Installing : dwz-0.11-3.el7.x86_64 11/49 Installing : bzip2-1.0.6-13.el7.x86_64 12/49 Installing : usermode-1.111-5.el7.x86_64 13/49 Installing : patch-2.7.1-10.el7_5.x86_64 14/49 Installing : python-backports-1.0-8.el7.x86_64 15/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 16/49 Installing : python-urllib3-1.10.2-5.el7.noarch 17/49 Installing : python-requests-2.6.0-1.el7_1.noarch 18/49 Installing : python-babel-0.9.6-8.el7.noarch 19/49 Installing : distribution-gpg-keys-1.30-1.el7.noarch 20/49 Installing : mock-core-configs-30.2-1.el7.noarch 21/49 Installing : libmodman-2.0.1-8.el7.x86_64 22/49 Installing : libproxy-0.4.11-11.el7.x86_64 23/49 Installing : python-markupsafe-0.11-10.el7.x86_64 24/49 Installing : python-jinja2-2.7.2-3.el7_6.noarch 25/49 Installing : python2-distro-1.2.0-3.el7.noarch 26/49 Installing : gdb-7.6.1-114.el7.x86_64 27/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 28/49 Installing : perl-srpm-macros-1-8.el7.noarch 29/49 Installing : pigz-2.3.4-1.el7.x86_64 30/49 Installing : python2-pyroute2-0.4.13-2.el7.noarch 31/49 Installing : golang-src-1.11.5-1.el7.noarch 32/49 Installing : kernel-headers-3.10.0-957.12.1.el7.x86_64 33/49 Installing : glibc-headers-2.17-260.el7_6.5.x86_64 34/49 Installing : glibc-devel-2.17-260.el7_6.5.x86_64 35/49 Installing : gcc-4.8.5-36.el7_6.2.x86_64 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-9.el7_6.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : 
golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.15-1.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 2/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 3/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 4/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 5/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 6/49 Verifying : zip-3.0-11.el7.x86_64 7/49 Verifying : nettle-2.7.1-8.el7.x86_64 8/49 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 9/49 Verifying : kernel-headers-3.10.0-957.12.1.el7.x86_64 10/49 Verifying : golang-src-1.11.5-1.el7.noarch 11/49 Verifying : python2-pyroute2-0.4.13-2.el7.noarch 12/49 Verifying : pigz-2.3.4-1.el7.x86_64 13/49 Verifying : perl-srpm-macros-1-8.el7.noarch 14/49 Verifying : golang-1.11.5-1.el7.x86_64 15/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 16/49 Verifying : glibc-devel-2.17-260.el7_6.5.x86_64 17/49 Verifying : python-jinja2-2.7.2-3.el7_6.noarch 18/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 19/49 Verifying : gdb-7.6.1-114.el7.x86_64 20/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 22/49 Verifying : gnutls-3.3.29-9.el7_6.x86_64 23/49 Verifying : python2-distro-1.2.0-3.el7.noarch 24/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 25/49 Verifying : libmodman-2.0.1-8.el7.x86_64 26/49 Verifying : mpfr-3.1.1-4.el7.x86_64 27/49 Verifying : distribution-gpg-keys-1.30-1.el7.noarch 28/49 Verifying : python-babel-0.9.6-8.el7.noarch 29/49 Verifying : mock-1.4.15-1.el7.noarch 30/49 Verifying : apr-util-1.5.2-6.el7.x86_64 31/49 Verifying : python-backports-1.0-8.el7.x86_64 32/49 Verifying : patch-2.7.1-10.el7_5.x86_64 33/49 Verifying : libmpc-1.0.1-3.el7.x86_64 34/49 Verifying : usermode-1.111-5.el7.x86_64 35/49 Verifying : python-six-1.9.0-2.el7.noarch 36/49 Verifying : libproxy-0.4.11-11.el7.x86_64 37/49 Verifying : neon-0.30.0-3.el7.x86_64 38/49 Verifying : mock-core-configs-30.2-1.el7.noarch 39/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 40/49 Verifying : bzip2-1.0.6-13.el7.x86_64 41/49 Verifying : subversion-1.7.14-14.el7.x86_64 42/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 43/49 Verifying : glibc-headers-2.17-260.el7_6.5.x86_64 44/49 Verifying : dwz-0.11-3.el7.x86_64 45/49 Verifying : unzip-6.0-19.el7.x86_64 46/49 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 47/49 Verifying : pakchois-0.4-10.el7.x86_64 48/49 Verifying : elfutils-0.172-2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.15-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.30-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5 glibc-headers.x86_64 0:2.17-260.el7_6.5 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.12.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 
perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-3.el7_6 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-3.el7 python2-pyroute2.noarch 0:0.4.13-2.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1966 0 --:--:-- --:--:-- --:--:-- 1977 100 8513k 100 8513k 0 0 15.4M 0 --:--:-- --:--:-- --:--:-- 15.4M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2450 0 --:--:-- --:--:-- --:--:-- 2458 0 38.3M 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 38.3M 100 38.3M 0 0 48.5M 0 --:--:-- --:--:-- --:--:-- 96.7M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 559 0 --:--:-- --:--:-- --:--:-- 558 0 0 0 620 0 0 1710 0 --:--:-- --:--:-- --:--:-- 1710 100 10.7M 100 10.7M 0 0 16.3M 0 --:--:-- --:--:-- --:--:-- 16.3M ~/nightlyrpm7PzIDy/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpm7PzIDy/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpm7PzIDy/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpm7PzIDy ~ INFO: mock.py version 1.4.15 starting (python version = 2.7.5)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpm7PzIDy/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.15 INFO: Mock Version: 1.4.15 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpm7PzIDy/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 22 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M ee068b568dbd4ed7b4363b4299076a30 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.LnN4nx:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins8960863023396055275.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done 1f6fa6af +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 186 | n59.crusty | 172.19.2.59 | crusty | 3537 | Deployed | 1f6fa6af | None | None | 7 | x86_64 | 1 | 2580 | None | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Fri May 10 00:51:09 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 10 May 2019 00:51:09 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10044 - Failure! 
(release-4.1 on CentOS-6/x86_64) Message-ID: <1528664448.3683.1557449469568.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10044 - Failure: Check console output at https://ci.centos.org/job/gluster_build-rpms/10044/ to view the results. From ci at centos.org Fri May 10 00:51:56 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 10 May 2019 00:51:56 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10045 - Still Failing! (release-4.1 on CentOS-7/x86_64) In-Reply-To: <1528664448.3683.1557449469568.JavaMail.jenkins@jenkins.ci.centos.org> References: <1528664448.3683.1557449469568.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1443003120.3685.1557449517098.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10045 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10045/ to view the results. From ci at centos.org Fri May 10 00:52:44 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 10 May 2019 00:52:44 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10046 - Still Failing! (release-5 on CentOS-6/x86_64) In-Reply-To: <1443003120.3685.1557449517098.JavaMail.jenkins@jenkins.ci.centos.org> References: <1443003120.3685.1557449517098.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <145135879.3687.1557449564546.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10046 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10046/ to view the results. From ci at centos.org Fri May 10 00:53:31 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 10 May 2019 00:53:31 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10047 - Still Failing! (release-5 on CentOS-7/x86_64) In-Reply-To: <145135879.3687.1557449564546.JavaMail.jenkins@jenkins.ci.centos.org> References: <145135879.3687.1557449564546.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <6961408.3689.1557449611954.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10047 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10047/ to view the results. From ci at centos.org Fri May 10 00:54:18 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 10 May 2019 00:54:18 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10048 - Still Failing! (release-6 on CentOS-7/x86_64) In-Reply-To: <6961408.3689.1557449611954.JavaMail.jenkins@jenkins.ci.centos.org> References: <6961408.3689.1557449611954.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1265400982.3691.1557449658729.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10048 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10048/ to view the results. From ci at centos.org Fri May 10 00:55:07 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 10 May 2019 00:55:07 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10049 - Still Failing! (release-6 on CentOS-6/x86_64) In-Reply-To: <1265400982.3691.1557449658729.JavaMail.jenkins@jenkins.ci.centos.org> References: <1265400982.3691.1557449658729.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1619138541.3693.1557449708037.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10049 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10049/ to view the results. 
From ci at centos.org Fri May 10 01:03:03 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 10 May 2019 01:03:03 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #188 In-Reply-To: <1124495381.3540.1557363704097.JavaMail.jenkins@jenkins.ci.centos.org> References: <1124495381.3540.1557363704097.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1428400448.3695.1557450183920.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 55.26 KB...] changed: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? lint ??? cleanup ??? destroy ??? dependency ??? syntax ??? create ??? prepare ??? converge ??? idempotence ??? side_effect ??? verify ??? cleanup ??? destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. 
--> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? create ??? prepare --> Scenario: 'default' --> Action: 'create' PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? lint ??? cleanup ??? destroy ??? dependency ??? syntax ??? create ??? prepare ??? converge ??? idempotence ??? side_effect ??? verify ??? cleanup ??? destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Fri May 10 01:07:53 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 10 May 2019 01:07:53 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #163 In-Reply-To: <428155600.3538.1557363346393.JavaMail.jenkins@jenkins.ci.centos.org> References: <428155600.3538.1557363346393.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1981799147.3696.1557450473913.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 459.57 KB...] changed: [kube1] => (item=gcs-etcd-operator.yml) changed: [kube1] => (item=gcs-etcd-cluster.yml) changed: [kube1] => (item=gcs-gd2-services.yml) changed: [kube1] => (item=gcs-fs-csi.yml) changed: [kube1] => (item=gcs-storage-snapshot.yml) changed: [kube1] => (item=gcs-virtblock-csi.yml) changed: [kube1] => (item=gcs-storage-virtblock.yml) changed: [kube1] => (item=gcs-prometheus-operator.yml) changed: [kube1] => (item=gcs-prometheus-bundle.yml) changed: [kube1] => (item=gcs-prometheus-alertmanager-cluster.yml) changed: [kube1] => (item=gcs-prometheus-operator-metrics.yml) changed: [kube1] => (item=gcs-prometheus-kube-state-metrics.yml) changed: [kube1] => (item=gcs-prometheus-node-exporter.yml) changed: [kube1] => (item=gcs-prometheus-kube-metrics.yml) changed: [kube1] => (item=gcs-prometheus-etcd.yml) changed: [kube1] => (item=gcs-grafana.yml) changed: [kube1] => (item=gcs-operator-crd.yml) changed: [kube1] => (item=gcs-operator.yml) changed: [kube1] => (item=gcs-mixins.yml) TASK [GCS Pre | Manifests | Create GD2 manifests] ****************************** Friday 10 May 2019 01:56:09 +0100 (0:00:35.596) 0:18:27.592 ************ included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Set fact kube_hostname] *** Friday 10 May 2019 01:56:09 +0100 (0:00:00.263) 0:18:27.856 ************ ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Create gcs-gd2-kube1.yml] *** Friday 10 May 2019 01:56:10 +0100 (0:00:00.535) 0:18:28.392 ************ changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Set fact kube_hostname] *** Friday 10 May 2019 01:56:12 +0100 (0:00:02.063) 0:18:30.455 ************ ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Create gcs-gd2-kube2.yml] *** Friday 10 May 2019 01:56:12 +0100 (0:00:00.446) 0:18:30.902 ************ changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Set fact kube_hostname] *** Friday 10 May 2019 01:56:14 +0100 (0:00:02.091) 0:18:32.994 ************ ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Create gcs-gd2-kube3.yml] *** Friday 10 May 2019 01:56:15 +0100 (0:00:00.414) 0:18:33.409 ************ changed: [kube1] TASK [GCS | Namespace | Create GCS namespace] ********************************** Friday 10 May 2019 01:56:17 +0100 (0:00:02.060) 0:18:35.470 ************ ok: [kube1] TASK [GCS | ETCD Operator | Deploy etcd-operator] ****************************** Friday 10 May 2019 01:56:18 +0100 
(0:00:01.626) 0:18:37.097 ************ ok: [kube1] TASK [GCS | ETCD Operator | Wait for etcd-operator to be available] ************ Friday 10 May 2019 01:56:20 +0100 (0:00:01.597) 0:18:38.694 ************ FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (50 retries left). changed: [kube1] TASK [GCS | Anthill | Register CRDs] ******************************************* Friday 10 May 2019 01:56:32 +0100 (0:00:12.103) 0:18:50.798 ************ ok: [kube1] TASK [Wait for GlusterCluster CRD to be registered] **************************** Friday 10 May 2019 01:56:34 +0100 (0:00:01.577) 0:18:52.376 ************ ok: [kube1] TASK [Wait for GlusterNode CRD to be registered] ******************************* Friday 10 May 2019 01:56:35 +0100 (0:00:01.246) 0:18:53.623 ************ ok: [kube1] TASK [GCS | Anthill | Deploy operator] ***************************************** Friday 10 May 2019 01:56:36 +0100 (0:00:01.314) 0:18:54.938 ************ ok: [kube1] TASK [GCS | ETCD Cluster | Deploy etcd-cluster] ******************************** Friday 10 May 2019 01:56:38 +0100 (0:00:01.725) 0:18:56.663 ************ ok: [kube1] TASK [GCS | ETCD Cluster | Get etcd-client service] **************************** Friday 10 May 2019 01:56:40 +0100 (0:00:01.856) 0:18:58.519 ************ changed: [kube1] TASK [GCS | ETCD Cluster | Set etcd_client_endpoint] *************************** Friday 10 May 2019 01:56:41 +0100 (0:00:01.334) 0:18:59.854 ************ ok: [kube1] TASK [GCS | ETCD Cluster | Wait for etcd-cluster to become ready] ************** Friday 10 May 2019 01:56:41 +0100 (0:00:00.355) 0:19:00.210 ************ FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (46 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (45 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (44 retries left). 
ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2 services] ********************************* Friday 10 May 2019 01:58:06 +0100 (0:01:24.155) 0:20:24.365 ************ ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2] ****************************************** Friday 10 May 2019 01:58:07 +0100 (0:00:01.609) 0:20:25.974 ************ included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Friday 10 May 2019 01:58:07 +0100 (0:00:00.208) 0:20:26.183 ************ ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube1] *************************** Friday 10 May 2019 01:58:08 +0100 (0:00:00.338) 0:20:26.521 ************ ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Friday 10 May 2019 01:58:09 +0100 (0:00:01.641) 0:20:28.162 ************ ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] *************************** Friday 10 May 2019 01:58:10 +0100 (0:00:00.365) 0:20:28.528 ************ ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Friday 10 May 2019 01:58:11 +0100 (0:00:01.606) 0:20:30.135 ************ ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] *************************** Friday 10 May 2019 01:58:12 +0100 (0:00:00.406) 0:20:30.541 ************ ok: [kube1] TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************ Friday 10 May 2019 01:58:13 +0100 (0:00:01.511) 0:20:32.053 ************ changed: [kube1] TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] ***************************** Friday 10 May 2019 01:58:14 +0100 (0:00:01.254) 0:20:33.308 ************ ok: [kube1] TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] ********** Friday 10 May 2019 01:58:15 +0100 (0:00:00.333) 0:20:33.642 ************ FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (46 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (45 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (44 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (43 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (42 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (41 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (40 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (39 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (38 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (37 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (36 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (35 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (34 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (33 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (32 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (31 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (30 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (29 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (28 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (27 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (26 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (25 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (24 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (23 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (22 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (21 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (20 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (19 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (18 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (17 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (16 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (15 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (14 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (13 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (12 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (11 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (10 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (9 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (8 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (7 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (6 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (5 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (4 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (3 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (2 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (1 retries left). fatal: [kube1]: FAILED! 
=> {"attempts": 50, "changed": false, "content": "", "msg": "Status code was -1 and not [200]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.58.249:24007/v1/peers"} to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry PLAY RECAP ********************************************************************* kube1 : ok=420 changed=119 unreachable=0 failed=1 kube2 : ok=321 changed=93 unreachable=0 failed=0 kube3 : ok=283 changed=78 unreachable=0 failed=0 Friday 10 May 2019 02:07:53 +0100 (0:09:38.196) 0:30:11.838 ************ =============================================================================== GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready -------- 578.20s GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 84.16s download : container_download | download images for kubeadm config images -- 49.38s kubernetes/master : kubeadm | Init other uninitialized masters --------- 39.38s kubernetes/master : kubeadm | Initialize first master ------------------ 38.32s GCS Pre | Manifests | Sync GCS manifests ------------------------------- 35.60s Install packages ------------------------------------------------------- 34.29s etcd : Gen_certs | Write etcd master certs ----------------------------- 33.19s Wait for host to be available ------------------------------------------ 32.05s kubernetes/master : kubeadm | write out kubeadm certs ------------------ 20.88s kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template -- 18.88s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 17.36s kubernetes-apps/ansible : Kubernetes Apps | Start Resources ------------ 13.66s etcd : Gen_certs | Gather etcd master certs ---------------------------- 13.05s gather facts from all instances ---------------------------------------- 12.92s GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 12.10s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Create manifests -- 11.74s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Apply manifests -- 11.43s container-engine/docker : Docker | pause while Docker restarts --------- 10.36s download : container_download | Download containers if pull is required or told to always pull (all nodes) --- 9.28s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Sat May 11 00:15:57 2019 From: ci at centos.org (ci at centos.org) Date: Sat, 11 May 2019 00:15:57 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #360 In-Reply-To: <651151862.3678.1557447355596.JavaMail.jenkins@jenkins.ci.centos.org> References: <651151862.3678.1557447355596.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1903912367.3792.1557533757835.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.39 KB...] Total 61 MB/s | 143 MB 00:02 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7_6.2.x86_64 7/49 Installing : elfutils-0.172-2.el7.x86_64 8/49 Installing : pakchois-0.4-10.el7.x86_64 9/49 Installing : unzip-6.0-19.el7.x86_64 10/49 Installing : dwz-0.11-3.el7.x86_64 11/49 Installing : bzip2-1.0.6-13.el7.x86_64 12/49 Installing : usermode-1.111-5.el7.x86_64 13/49 Installing : patch-2.7.1-10.el7_5.x86_64 14/49 Installing : python-backports-1.0-8.el7.x86_64 15/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 16/49 Installing : python-urllib3-1.10.2-5.el7.noarch 17/49 Installing : python-requests-2.6.0-1.el7_1.noarch 18/49 Installing : python-babel-0.9.6-8.el7.noarch 19/49 Installing : distribution-gpg-keys-1.30-1.el7.noarch 20/49 Installing : mock-core-configs-30.2-1.el7.noarch 21/49 Installing : libmodman-2.0.1-8.el7.x86_64 22/49 Installing : libproxy-0.4.11-11.el7.x86_64 23/49 Installing : python-markupsafe-0.11-10.el7.x86_64 24/49 Installing : python-jinja2-2.7.2-3.el7_6.noarch 25/49 Installing : python2-distro-1.2.0-3.el7.noarch 26/49 Installing : gdb-7.6.1-114.el7.x86_64 27/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 28/49 Installing : perl-srpm-macros-1-8.el7.noarch 29/49 Installing : pigz-2.3.4-1.el7.x86_64 30/49 Installing : python2-pyroute2-0.4.13-2.el7.noarch 31/49 Installing : golang-src-1.11.5-1.el7.noarch 32/49 Installing : kernel-headers-3.10.0-957.12.1.el7.x86_64 33/49 Installing : glibc-headers-2.17-260.el7_6.5.x86_64 34/49 Installing : glibc-devel-2.17-260.el7_6.5.x86_64 35/49 Installing : gcc-4.8.5-36.el7_6.2.x86_64 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-9.el7_6.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : 
golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.15-1.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 2/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 3/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 4/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 5/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 6/49 Verifying : zip-3.0-11.el7.x86_64 7/49 Verifying : nettle-2.7.1-8.el7.x86_64 8/49 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 9/49 Verifying : kernel-headers-3.10.0-957.12.1.el7.x86_64 10/49 Verifying : golang-src-1.11.5-1.el7.noarch 11/49 Verifying : python2-pyroute2-0.4.13-2.el7.noarch 12/49 Verifying : pigz-2.3.4-1.el7.x86_64 13/49 Verifying : perl-srpm-macros-1-8.el7.noarch 14/49 Verifying : golang-1.11.5-1.el7.x86_64 15/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 16/49 Verifying : glibc-devel-2.17-260.el7_6.5.x86_64 17/49 Verifying : python-jinja2-2.7.2-3.el7_6.noarch 18/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 19/49 Verifying : gdb-7.6.1-114.el7.x86_64 20/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 22/49 Verifying : gnutls-3.3.29-9.el7_6.x86_64 23/49 Verifying : python2-distro-1.2.0-3.el7.noarch 24/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 25/49 Verifying : libmodman-2.0.1-8.el7.x86_64 26/49 Verifying : mpfr-3.1.1-4.el7.x86_64 27/49 Verifying : distribution-gpg-keys-1.30-1.el7.noarch 28/49 Verifying : python-babel-0.9.6-8.el7.noarch 29/49 Verifying : mock-1.4.15-1.el7.noarch 30/49 Verifying : apr-util-1.5.2-6.el7.x86_64 31/49 Verifying : python-backports-1.0-8.el7.x86_64 32/49 Verifying : patch-2.7.1-10.el7_5.x86_64 33/49 Verifying : libmpc-1.0.1-3.el7.x86_64 34/49 Verifying : usermode-1.111-5.el7.x86_64 35/49 Verifying : python-six-1.9.0-2.el7.noarch 36/49 Verifying : libproxy-0.4.11-11.el7.x86_64 37/49 Verifying : neon-0.30.0-3.el7.x86_64 38/49 Verifying : mock-core-configs-30.2-1.el7.noarch 39/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 40/49 Verifying : bzip2-1.0.6-13.el7.x86_64 41/49 Verifying : subversion-1.7.14-14.el7.x86_64 42/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 43/49 Verifying : glibc-headers-2.17-260.el7_6.5.x86_64 44/49 Verifying : dwz-0.11-3.el7.x86_64 45/49 Verifying : unzip-6.0-19.el7.x86_64 46/49 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 47/49 Verifying : pakchois-0.4-10.el7.x86_64 48/49 Verifying : elfutils-0.172-2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.15-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.30-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5 glibc-headers.x86_64 0:2.17-260.el7_6.5 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.12.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 
perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-3.el7_6 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-3.el7 python2-pyroute2.noarch 0:0.4.13-2.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1805 0 --:--:-- --:--:-- --:--:-- 1811 54 8513k 54 4640k 0 0 6137k 0 0:00:01 --:--:-- 0:00:01 6137k100 8513k 100 8513k 0 0 10.0M 0 --:--:-- --:--:-- --:--:-- 55.6M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 1977 0 --:--:-- --:--:-- --:--:-- 1984 100 38.3M 100 38.3M 0 0 44.0M 0 --:--:-- --:--:-- --:--:-- 44.0M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 599 0 --:--:-- --:--:-- --:--:-- 602 0 0 0 620 0 0 1668 0 --:--:-- --:--:-- --:--:-- 1668 2 10.7M 2 283k 0 0 474k 0 0:00:23 --:--:-- 0:00:23 474k100 10.7M 100 10.7M 0 0 14.6M 0 --:--:-- --:--:-- --:--:-- 76.8M ~/nightlyrpmTTZnST/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmTTZnST/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmTTZnST/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmTTZnST ~ INFO: mock.py version 1.4.15 starting (python version = 2.7.5)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmTTZnST/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.15 INFO: Mock Version: 1.4.15 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmTTZnST/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 25 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 9f9b5b9c4d7e47bab8bd939ce1585e89 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.1_zbjS:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins2967333504763766184.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done b63b7cca +---------+-----------+------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 131 | n4.crusty | 172.19.2.4 | crusty | 3546 | Deployed | b63b7cca | None | None | 7 | x86_64 | 1 | 2030 | None | +---------+-----------+------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Sat May 11 00:51:01 2019 From: ci at centos.org (ci at centos.org) Date: Sat, 11 May 2019 00:51:01 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10052 - Failure! 
(release-4.1 on CentOS-6/x86_64) Message-ID: <307582258.3797.1557535861238.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10052 - Failure: Check console output at https://ci.centos.org/job/gluster_build-rpms/10052/ to view the results. From ci at centos.org Sat May 11 00:51:49 2019 From: ci at centos.org (ci at centos.org) Date: Sat, 11 May 2019 00:51:49 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10053 - Still Failing! (release-4.1 on CentOS-7/x86_64) In-Reply-To: <307582258.3797.1557535861238.JavaMail.jenkins@jenkins.ci.centos.org> References: <307582258.3797.1557535861238.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <498259617.3799.1557535909604.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10053 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10053/ to view the results. From ci at centos.org Sat May 11 00:52:20 2019 From: ci at centos.org (ci at centos.org) Date: Sat, 11 May 2019 00:52:20 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10054 - Still Failing! (release-5 on CentOS-6/x86_64) In-Reply-To: <498259617.3799.1557535909604.JavaMail.jenkins@jenkins.ci.centos.org> References: <498259617.3799.1557535909604.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1267819512.3801.1557535940290.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10054 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10054/ to view the results. From ci at centos.org Sat May 11 00:52:54 2019 From: ci at centos.org (ci at centos.org) Date: Sat, 11 May 2019 00:52:54 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10055 - Still Failing! (release-5 on CentOS-7/x86_64) In-Reply-To: <1267819512.3801.1557535940290.JavaMail.jenkins@jenkins.ci.centos.org> References: <1267819512.3801.1557535940290.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <956430064.3804.1557535974309.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10055 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10055/ to view the results. From ci at centos.org Sat May 11 00:53:41 2019 From: ci at centos.org (ci at centos.org) Date: Sat, 11 May 2019 00:53:41 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10056 - Still Failing! (release-6 on CentOS-7/x86_64) In-Reply-To: <956430064.3804.1557535974309.JavaMail.jenkins@jenkins.ci.centos.org> References: <956430064.3804.1557535974309.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <4879496.3806.1557536021270.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10056 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10056/ to view the results. From ci at centos.org Sat May 11 00:54:27 2019 From: ci at centos.org (ci at centos.org) Date: Sat, 11 May 2019 00:54:27 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10057 - Still Failing! (release-6 on CentOS-6/x86_64) In-Reply-To: <4879496.3806.1557536021270.JavaMail.jenkins@jenkins.ci.centos.org> References: <4879496.3806.1557536021270.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1527444948.3808.1557536067828.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10057 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10057/ to view the results. 
From ci at centos.org Sat May 11 00:55:27 2019 From: ci at centos.org (ci at centos.org) Date: Sat, 11 May 2019 00:55:27 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #164 In-Reply-To: <1981799147.3696.1557450473913.JavaMail.jenkins@jenkins.ci.centos.org> References: <1981799147.3696.1557450473913.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <2113035517.3809.1557536127609.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 459.27 KB...] changed: [kube1] => (item=gcs-etcd-operator.yml) changed: [kube1] => (item=gcs-etcd-cluster.yml) changed: [kube1] => (item=gcs-gd2-services.yml) changed: [kube1] => (item=gcs-fs-csi.yml) changed: [kube1] => (item=gcs-storage-snapshot.yml) changed: [kube1] => (item=gcs-virtblock-csi.yml) changed: [kube1] => (item=gcs-storage-virtblock.yml) changed: [kube1] => (item=gcs-prometheus-operator.yml) changed: [kube1] => (item=gcs-prometheus-bundle.yml) changed: [kube1] => (item=gcs-prometheus-alertmanager-cluster.yml) changed: [kube1] => (item=gcs-prometheus-operator-metrics.yml) changed: [kube1] => (item=gcs-prometheus-kube-state-metrics.yml) changed: [kube1] => (item=gcs-prometheus-node-exporter.yml) changed: [kube1] => (item=gcs-prometheus-kube-metrics.yml) changed: [kube1] => (item=gcs-prometheus-etcd.yml) changed: [kube1] => (item=gcs-grafana.yml) changed: [kube1] => (item=gcs-operator-crd.yml) changed: [kube1] => (item=gcs-operator.yml) changed: [kube1] => (item=gcs-mixins.yml) TASK [GCS Pre | Manifests | Create GD2 manifests] ****************************** Saturday 11 May 2019 01:44:58 +0100 (0:00:11.896) 0:10:19.132 ********** included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Set fact kube_hostname] *** Saturday 11 May 2019 01:44:58 +0100 (0:00:00.096) 0:10:19.229 ********** ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Create gcs-gd2-kube1.yml] *** Saturday 11 May 2019 01:44:58 +0100 (0:00:00.137) 0:10:19.367 ********** changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Set fact kube_hostname] *** Saturday 11 May 2019 01:44:59 +0100 (0:00:00.751) 0:10:20.118 ********** ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Create gcs-gd2-kube2.yml] *** Saturday 11 May 2019 01:44:59 +0100 (0:00:00.136) 0:10:20.255 ********** changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Set fact kube_hostname] *** Saturday 11 May 2019 01:45:00 +0100 (0:00:00.716) 0:10:20.971 ********** ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Create gcs-gd2-kube3.yml] *** Saturday 11 May 2019 01:45:00 +0100 (0:00:00.151) 0:10:21.122 ********** changed: [kube1] TASK [GCS | Namespace | Create GCS namespace] ********************************** Saturday 11 May 2019 01:45:01 +0100 (0:00:00.736) 0:10:21.859 ********** ok: [kube1] TASK [GCS | ETCD Operator | Deploy etcd-operator] ****************************** Saturday 11 May 2019 01:45:01 +0100 (0:00:00.628) 0:10:22.487 ********** ok: [kube1] TASK [GCS | ETCD Operator | Wait for etcd-operator to be available] ************ Saturday 11 May 2019 01:45:02 +0100 (0:00:00.690) 0:10:23.178 ********** FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (50 retries left). 
changed: [kube1] TASK [GCS | Anthill | Register CRDs] ******************************************* Saturday 11 May 2019 01:45:13 +0100 (0:00:10.863) 0:10:34.042 ********** ok: [kube1] TASK [Wait for GlusterCluster CRD to be registered] **************************** Saturday 11 May 2019 01:45:13 +0100 (0:00:00.660) 0:10:34.702 ********** ok: [kube1] TASK [Wait for GlusterNode CRD to be registered] ******************************* Saturday 11 May 2019 01:45:14 +0100 (0:00:00.498) 0:10:35.200 ********** ok: [kube1] TASK [GCS | Anthill | Deploy operator] ***************************************** Saturday 11 May 2019 01:45:14 +0100 (0:00:00.488) 0:10:35.689 ********** ok: [kube1] TASK [GCS | ETCD Cluster | Deploy etcd-cluster] ******************************** Saturday 11 May 2019 01:45:15 +0100 (0:00:00.697) 0:10:36.386 ********** ok: [kube1] TASK [GCS | ETCD Cluster | Get etcd-client service] **************************** Saturday 11 May 2019 01:45:16 +0100 (0:00:00.851) 0:10:37.237 ********** FAILED - RETRYING: GCS | ETCD Cluster | Get etcd-client service (5 retries left). changed: [kube1] TASK [GCS | ETCD Cluster | Set etcd_client_endpoint] *************************** Saturday 11 May 2019 01:45:22 +0100 (0:00:05.838) 0:10:43.076 ********** ok: [kube1] TASK [GCS | ETCD Cluster | Wait for etcd-cluster to become ready] ************** Saturday 11 May 2019 01:45:22 +0100 (0:00:00.141) 0:10:43.218 ********** FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (46 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (45 retries left). 
ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2 services] ********************************* Saturday 11 May 2019 01:46:28 +0100 (0:01:06.205) 0:11:49.424 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2] ****************************************** Saturday 11 May 2019 01:46:29 +0100 (0:00:00.761) 0:11:50.185 ********** included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Saturday 11 May 2019 01:46:29 +0100 (0:00:00.104) 0:11:50.290 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube1] *************************** Saturday 11 May 2019 01:46:29 +0100 (0:00:00.179) 0:11:50.470 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Saturday 11 May 2019 01:46:30 +0100 (0:00:00.706) 0:11:51.176 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] *************************** Saturday 11 May 2019 01:46:30 +0100 (0:00:00.148) 0:11:51.325 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Saturday 11 May 2019 01:46:31 +0100 (0:00:00.742) 0:11:52.068 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] *************************** Saturday 11 May 2019 01:46:31 +0100 (0:00:00.161) 0:11:52.229 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************ Saturday 11 May 2019 01:46:32 +0100 (0:00:00.693) 0:11:52.922 ********** changed: [kube1] TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] ***************************** Saturday 11 May 2019 01:46:32 +0100 (0:00:00.559) 0:11:53.482 ********** ok: [kube1] TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] ********** Saturday 11 May 2019 01:46:32 +0100 (0:00:00.146) 0:11:53.629 ********** FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (46 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (45 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (44 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (43 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (42 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (41 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (40 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (39 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (38 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (37 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (36 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (35 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (34 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (33 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (32 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (31 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (30 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (29 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (28 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (27 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (26 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (25 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (24 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (23 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (22 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (21 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (20 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (19 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (18 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (17 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (16 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (15 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (14 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (13 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (12 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (11 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (10 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (9 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (8 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (7 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (6 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (5 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (4 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (3 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (2 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (1 retries left). fatal: [kube1]: FAILED! 
=> {"attempts": 50, "changed": false, "content": "", "msg": "Status code was -1 and not [200]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.38.193:24007/v1/peers"} to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry PLAY RECAP ********************************************************************* kube1 : ok=420 changed=119 unreachable=0 failed=1 kube2 : ok=321 changed=93 unreachable=0 failed=0 kube3 : ok=283 changed=78 unreachable=0 failed=0 Saturday 11 May 2019 01:55:27 +0100 (0:08:54.516) 0:20:48.145 ********** =============================================================================== GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready -------- 534.52s GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 66.21s download : container_download | download images for kubeadm config images -- 36.05s kubernetes/master : kubeadm | Initialize first master ------------------ 29.68s kubernetes/master : kubeadm | Init other uninitialized masters --------- 24.93s Install packages ------------------------------------------------------- 23.81s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 18.50s Wait for host to be available ------------------------------------------ 16.36s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 14.23s etcd : Gen_certs | Write etcd master certs ----------------------------- 12.92s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 12.42s GCS Pre | Manifests | Sync GCS manifests ------------------------------- 11.90s Extend root VG --------------------------------------------------------- 11.39s etcd : reload etcd ----------------------------------------------------- 11.14s GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 10.86s container-engine/docker : Docker | pause while Docker restarts --------- 10.22s gather facts from all instances ----------------------------------------- 9.37s download : file_download | Download item -------------------------------- 7.91s etcd : wait for etcd up ------------------------------------------------- 7.34s kubernetes/master : kubeadm | write out kubeadm certs ------------------- 7.24s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Sat May 11 01:05:30 2019 From: ci at centos.org (ci at centos.org) Date: Sat, 11 May 2019 01:05:30 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #189 In-Reply-To: <1428400448.3695.1557450183920.JavaMail.jenkins@jenkins.ci.centos.org> References: <1428400448.3695.1557450183920.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1820611020.3810.1557536731012.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 55.25 KB...] changed: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? lint ??? cleanup ??? destroy ??? dependency ??? syntax ??? create ??? prepare ??? converge ??? idempotence ??? side_effect ??? verify ??? cleanup ??? destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. 
--> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? create ??? prepare --> Scenario: 'default' --> Action: 'create' PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? lint ??? cleanup ??? destroy ??? dependency ??? syntax ??? create ??? prepare ??? converge ??? idempotence ??? side_effect ??? verify ??? cleanup ??? destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Sun May 12 00:15:57 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 12 May 2019 00:15:57 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #361 In-Reply-To: <1903912367.3792.1557533757835.JavaMail.jenkins@jenkins.ci.centos.org> References: <1903912367.3792.1557533757835.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <925609287.3866.1557620157874.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.42 KB...] Total 72 MB/s | 143 MB 00:01 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7_6.2.x86_64 7/49 Installing : elfutils-0.172-2.el7.x86_64 8/49 Installing : pakchois-0.4-10.el7.x86_64 9/49 Installing : unzip-6.0-19.el7.x86_64 10/49 Installing : dwz-0.11-3.el7.x86_64 11/49 Installing : bzip2-1.0.6-13.el7.x86_64 12/49 Installing : usermode-1.111-5.el7.x86_64 13/49 Installing : patch-2.7.1-10.el7_5.x86_64 14/49 Installing : python-backports-1.0-8.el7.x86_64 15/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 16/49 Installing : python-urllib3-1.10.2-5.el7.noarch 17/49 Installing : python-requests-2.6.0-1.el7_1.noarch 18/49 Installing : python-babel-0.9.6-8.el7.noarch 19/49 Installing : distribution-gpg-keys-1.30-1.el7.noarch 20/49 Installing : mock-core-configs-30.2-1.el7.noarch 21/49 Installing : libmodman-2.0.1-8.el7.x86_64 22/49 Installing : libproxy-0.4.11-11.el7.x86_64 23/49 Installing : python-markupsafe-0.11-10.el7.x86_64 24/49 Installing : python-jinja2-2.7.2-3.el7_6.noarch 25/49 Installing : python2-distro-1.2.0-3.el7.noarch 26/49 Installing : gdb-7.6.1-114.el7.x86_64 27/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 28/49 Installing : perl-srpm-macros-1-8.el7.noarch 29/49 Installing : pigz-2.3.4-1.el7.x86_64 30/49 Installing : python2-pyroute2-0.4.13-2.el7.noarch 31/49 Installing : golang-src-1.11.5-1.el7.noarch 32/49 Installing : kernel-headers-3.10.0-957.12.1.el7.x86_64 33/49 Installing : glibc-headers-2.17-260.el7_6.5.x86_64 34/49 Installing : glibc-devel-2.17-260.el7_6.5.x86_64 35/49 Installing : gcc-4.8.5-36.el7_6.2.x86_64 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-9.el7_6.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : 
golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.15-1.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 2/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 3/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 4/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 5/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 6/49 Verifying : zip-3.0-11.el7.x86_64 7/49 Verifying : nettle-2.7.1-8.el7.x86_64 8/49 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 9/49 Verifying : kernel-headers-3.10.0-957.12.1.el7.x86_64 10/49 Verifying : golang-src-1.11.5-1.el7.noarch 11/49 Verifying : python2-pyroute2-0.4.13-2.el7.noarch 12/49 Verifying : pigz-2.3.4-1.el7.x86_64 13/49 Verifying : perl-srpm-macros-1-8.el7.noarch 14/49 Verifying : golang-1.11.5-1.el7.x86_64 15/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 16/49 Verifying : glibc-devel-2.17-260.el7_6.5.x86_64 17/49 Verifying : python-jinja2-2.7.2-3.el7_6.noarch 18/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 19/49 Verifying : gdb-7.6.1-114.el7.x86_64 20/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 22/49 Verifying : gnutls-3.3.29-9.el7_6.x86_64 23/49 Verifying : python2-distro-1.2.0-3.el7.noarch 24/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 25/49 Verifying : libmodman-2.0.1-8.el7.x86_64 26/49 Verifying : mpfr-3.1.1-4.el7.x86_64 27/49 Verifying : distribution-gpg-keys-1.30-1.el7.noarch 28/49 Verifying : python-babel-0.9.6-8.el7.noarch 29/49 Verifying : mock-1.4.15-1.el7.noarch 30/49 Verifying : apr-util-1.5.2-6.el7.x86_64 31/49 Verifying : python-backports-1.0-8.el7.x86_64 32/49 Verifying : patch-2.7.1-10.el7_5.x86_64 33/49 Verifying : libmpc-1.0.1-3.el7.x86_64 34/49 Verifying : usermode-1.111-5.el7.x86_64 35/49 Verifying : python-six-1.9.0-2.el7.noarch 36/49 Verifying : libproxy-0.4.11-11.el7.x86_64 37/49 Verifying : neon-0.30.0-3.el7.x86_64 38/49 Verifying : mock-core-configs-30.2-1.el7.noarch 39/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 40/49 Verifying : bzip2-1.0.6-13.el7.x86_64 41/49 Verifying : subversion-1.7.14-14.el7.x86_64 42/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 43/49 Verifying : glibc-headers-2.17-260.el7_6.5.x86_64 44/49 Verifying : dwz-0.11-3.el7.x86_64 45/49 Verifying : unzip-6.0-19.el7.x86_64 46/49 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 47/49 Verifying : pakchois-0.4-10.el7.x86_64 48/49 Verifying : elfutils-0.172-2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.15-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.30-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5 glibc-headers.x86_64 0:2.17-260.el7_6.5 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.12.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 
perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-3.el7_6 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-3.el7 python2-pyroute2.noarch 0:0.4.13-2.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1970 0 --:--:-- --:--:-- --:--:-- 1983 100 8513k 100 8513k 0 0 14.8M 0 --:--:-- --:--:-- --:--:-- 14.8M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 1965 0 --:--:-- --:--:-- --:--:-- 1971 100 38.3M 100 38.3M 0 0 42.4M 0 --:--:-- --:--:-- --:--:-- 42.4M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 408 0 --:--:-- --:--:-- --:--:-- 409 0 0 0 620 0 0 1386 0 --:--:-- --:--:-- --:--:-- 1386 21 10.7M 21 2320k 0 0 3745k 0 0:00:02 --:--:-- 0:00:02 3745k100 10.7M 100 10.7M 0 0 15.1M 0 --:--:-- --:--:-- --:--:-- 96.2M ~/nightlyrpmj8K8PB/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmj8K8PB/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmj8K8PB/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmj8K8PB ~ INFO: mock.py version 1.4.15 starting (python version = 2.7.5)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmj8K8PB/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.15 INFO: Mock Version: 1.4.15 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmj8K8PB/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 22 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 4a8450eea1c0497fb3a819701f837206 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.IatLTr:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins8681471081867021294.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done 302252eb +---------+----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 261 | n6.gusty | 172.19.2.134 | gusty | 3555 | Deployed | 302252eb | None | None | 7 | x86_64 | 1 | 2050 | None | +---------+----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Sun May 12 00:50:47 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 12 May 2019 00:50:47 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10060 - Failure! 
(release-4.1 on CentOS-6/x86_64) Message-ID: <1770386798.3870.1557622247966.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10060 - Failure: Check console output at https://ci.centos.org/job/gluster_build-rpms/10060/ to view the results. From ci at centos.org Sun May 12 00:51:22 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 12 May 2019 00:51:22 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10061 - Still Failing! (release-4.1 on CentOS-7/x86_64) In-Reply-To: <1770386798.3870.1557622247966.JavaMail.jenkins@jenkins.ci.centos.org> References: <1770386798.3870.1557622247966.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1308176774.3872.1557622282325.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10061 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10061/ to view the results. From ci at centos.org Sun May 12 00:52:09 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 12 May 2019 00:52:09 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10062 - Still Failing! (release-5 on CentOS-6/x86_64) In-Reply-To: <1308176774.3872.1557622282325.JavaMail.jenkins@jenkins.ci.centos.org> References: <1308176774.3872.1557622282325.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1530093371.3874.1557622330147.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10062 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10062/ to view the results. From ci at centos.org Sun May 12 00:52:57 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 12 May 2019 00:52:57 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10063 - Still Failing! (release-5 on CentOS-7/x86_64) In-Reply-To: <1530093371.3874.1557622330147.JavaMail.jenkins@jenkins.ci.centos.org> References: <1530093371.3874.1557622330147.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1601433715.3876.1557622377542.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10063 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10063/ to view the results. From ci at centos.org Sun May 12 00:53:43 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 12 May 2019 00:53:43 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10064 - Still Failing! (release-6 on CentOS-7/x86_64) In-Reply-To: <1601433715.3876.1557622377542.JavaMail.jenkins@jenkins.ci.centos.org> References: <1601433715.3876.1557622377542.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <913514813.3878.1557622424108.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10064 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10064/ to view the results. From ci at centos.org Sun May 12 00:54:30 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 12 May 2019 00:54:30 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10065 - Still Failing! (release-6 on CentOS-6/x86_64) In-Reply-To: <913514813.3878.1557622424108.JavaMail.jenkins@jenkins.ci.centos.org> References: <913514813.3878.1557622424108.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <233582466.3880.1557622470935.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10065 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10065/ to view the results. 
From ci at centos.org Sun May 12 00:55:32 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 12 May 2019 00:55:32 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #165 In-Reply-To: <2113035517.3809.1557536127609.JavaMail.jenkins@jenkins.ci.centos.org> References: <2113035517.3809.1557536127609.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <50586485.3881.1557622532892.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 459.41 KB...] changed: [kube1] => (item=gcs-etcd-operator.yml) changed: [kube1] => (item=gcs-etcd-cluster.yml) changed: [kube1] => (item=gcs-gd2-services.yml) changed: [kube1] => (item=gcs-fs-csi.yml) changed: [kube1] => (item=gcs-storage-snapshot.yml) changed: [kube1] => (item=gcs-virtblock-csi.yml) changed: [kube1] => (item=gcs-storage-virtblock.yml) changed: [kube1] => (item=gcs-prometheus-operator.yml) changed: [kube1] => (item=gcs-prometheus-bundle.yml) changed: [kube1] => (item=gcs-prometheus-alertmanager-cluster.yml) changed: [kube1] => (item=gcs-prometheus-operator-metrics.yml) changed: [kube1] => (item=gcs-prometheus-kube-state-metrics.yml) changed: [kube1] => (item=gcs-prometheus-node-exporter.yml) changed: [kube1] => (item=gcs-prometheus-kube-metrics.yml) changed: [kube1] => (item=gcs-prometheus-etcd.yml) changed: [kube1] => (item=gcs-grafana.yml) changed: [kube1] => (item=gcs-operator-crd.yml) changed: [kube1] => (item=gcs-operator.yml) changed: [kube1] => (item=gcs-mixins.yml) TASK [GCS Pre | Manifests | Create GD2 manifests] ****************************** Sunday 12 May 2019 01:45:01 +0100 (0:00:11.895) 0:10:17.037 ************ included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Set fact kube_hostname] *** Sunday 12 May 2019 01:45:01 +0100 (0:00:00.086) 0:10:17.123 ************ ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Create gcs-gd2-kube1.yml] *** Sunday 12 May 2019 01:45:01 +0100 (0:00:00.147) 0:10:17.271 ************ changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Set fact kube_hostname] *** Sunday 12 May 2019 01:45:02 +0100 (0:00:00.732) 0:10:18.003 ************ ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Create gcs-gd2-kube2.yml] *** Sunday 12 May 2019 01:45:02 +0100 (0:00:00.209) 0:10:18.213 ************ changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Set fact kube_hostname] *** Sunday 12 May 2019 01:45:03 +0100 (0:00:00.802) 0:10:19.015 ************ ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Create gcs-gd2-kube3.yml] *** Sunday 12 May 2019 01:45:03 +0100 (0:00:00.140) 0:10:19.156 ************ changed: [kube1] TASK [GCS | Namespace | Create GCS namespace] ********************************** Sunday 12 May 2019 01:45:04 +0100 (0:00:00.763) 0:10:19.920 ************ ok: [kube1] TASK [GCS | ETCD Operator | Deploy etcd-operator] ****************************** Sunday 12 May 2019 01:45:04 +0100 (0:00:00.696) 0:10:20.616 ************ ok: [kube1] TASK [GCS | ETCD Operator | Wait for etcd-operator to be available] ************ Sunday 12 May 2019 01:45:05 +0100 (0:00:00.717) 0:10:21.334 ************ FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (50 retries left). 
changed: [kube1] TASK [GCS | Anthill | Register CRDs] ******************************************* Sunday 12 May 2019 01:45:16 +0100 (0:00:10.834) 0:10:32.168 ************ ok: [kube1] TASK [Wait for GlusterCluster CRD to be registered] **************************** Sunday 12 May 2019 01:45:17 +0100 (0:00:00.716) 0:10:32.885 ************ ok: [kube1] TASK [Wait for GlusterNode CRD to be registered] ******************************* Sunday 12 May 2019 01:45:17 +0100 (0:00:00.537) 0:10:33.423 ************ ok: [kube1] TASK [GCS | Anthill | Deploy operator] ***************************************** Sunday 12 May 2019 01:45:18 +0100 (0:00:00.538) 0:10:33.962 ************ ok: [kube1] TASK [GCS | ETCD Cluster | Deploy etcd-cluster] ******************************** Sunday 12 May 2019 01:45:18 +0100 (0:00:00.767) 0:10:34.729 ************ ok: [kube1] TASK [GCS | ETCD Cluster | Get etcd-client service] **************************** Sunday 12 May 2019 01:45:19 +0100 (0:00:00.940) 0:10:35.670 ************ FAILED - RETRYING: GCS | ETCD Cluster | Get etcd-client service (5 retries left). changed: [kube1] TASK [GCS | ETCD Cluster | Set etcd_client_endpoint] *************************** Sunday 12 May 2019 01:45:25 +0100 (0:00:05.909) 0:10:41.580 ************ ok: [kube1] TASK [GCS | ETCD Cluster | Wait for etcd-cluster to become ready] ************** Sunday 12 May 2019 01:45:25 +0100 (0:00:00.235) 0:10:41.816 ************ FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (46 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (45 retries left). 
ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2 services] ********************************* Sunday 12 May 2019 01:46:32 +0100 (0:01:06.220) 0:11:48.037 ************ ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2] ****************************************** Sunday 12 May 2019 01:46:33 +0100 (0:00:00.838) 0:11:48.875 ************ included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Sunday 12 May 2019 01:46:33 +0100 (0:00:00.103) 0:11:48.979 ************ ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube1] *************************** Sunday 12 May 2019 01:46:33 +0100 (0:00:00.219) 0:11:49.199 ************ ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Sunday 12 May 2019 01:46:34 +0100 (0:00:00.808) 0:11:50.007 ************ ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] *************************** Sunday 12 May 2019 01:46:34 +0100 (0:00:00.237) 0:11:50.245 ************ ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Sunday 12 May 2019 01:46:35 +0100 (0:00:01.047) 0:11:51.292 ************ ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] *************************** Sunday 12 May 2019 01:46:35 +0100 (0:00:00.216) 0:11:51.508 ************ ok: [kube1] TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************ Sunday 12 May 2019 01:46:36 +0100 (0:00:00.790) 0:11:52.299 ************ changed: [kube1] TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] ***************************** Sunday 12 May 2019 01:46:37 +0100 (0:00:00.611) 0:11:52.910 ************ ok: [kube1] TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] ********** Sunday 12 May 2019 01:46:37 +0100 (0:00:00.233) 0:11:53.144 ************ FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (46 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (45 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (44 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (43 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (42 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (41 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (40 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (39 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (38 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (37 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (36 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (35 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (34 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (33 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (32 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (31 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (30 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (29 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (28 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (27 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (26 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (25 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (24 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (23 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (22 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (21 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (20 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (19 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (18 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (17 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (16 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (15 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (14 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (13 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (12 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (11 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (10 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (9 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (8 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (7 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (6 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (5 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (4 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (3 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (2 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (1 retries left). fatal: [kube1]: FAILED! 
=> {"attempts": 50, "changed": false, "content": "", "msg": "Status code was -1 and not [200]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.21.17:24007/v1/peers"} to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry PLAY RECAP ********************************************************************* kube1 : ok=420 changed=119 unreachable=0 failed=1 kube2 : ok=321 changed=93 unreachable=0 failed=0 kube3 : ok=283 changed=78 unreachable=0 failed=0 Sunday 12 May 2019 01:55:32 +0100 (0:08:55.369) 0:20:48.514 ************ =============================================================================== GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready -------- 535.37s GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 66.22s download : container_download | download images for kubeadm config images -- 38.98s kubernetes/master : kubeadm | Initialize first master ------------------ 26.65s kubernetes/master : kubeadm | Init other uninitialized masters --------- 25.79s Install packages ------------------------------------------------------- 23.88s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 17.85s Wait for host to be available ------------------------------------------ 16.43s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 13.80s etcd : Gen_certs | Write etcd master certs ----------------------------- 13.11s Extend root VG --------------------------------------------------------- 12.39s GCS Pre | Manifests | Sync GCS manifests ------------------------------- 11.90s etcd : reload etcd ----------------------------------------------------- 10.98s GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 10.83s kubernetes/node : install | Copy hyperkube binary from download dir ---- 10.46s container-engine/docker : Docker | pause while Docker restarts --------- 10.22s download : container_download | Download containers if pull is required or told to always pull (all nodes) --- 9.02s gather facts from all instances ----------------------------------------- 8.20s etcd : wait for etcd up ------------------------------------------------- 7.51s download : file_download | Download item -------------------------------- 7.50s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Sun May 12 01:05:40 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 12 May 2019 01:05:40 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #190 In-Reply-To: <1820611020.3810.1557536731012.JavaMail.jenkins@jenkins.ci.centos.org> References: <1820611020.3810.1557536731012.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <719469240.3882.1557623140122.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 55.46 KB...] changed: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. 
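[All five findings point at one root cause: meta/main.yml still carries the boilerplate that ansible-galaxy init generates ('your description', 'your name', no platforms). A sketch of a filled-in meta/main.yml that would satisfy rules 701 and 703; the concrete author, license, and platform values are illustrative assumptions, not the project's real metadata:

    galaxy_info:
      author: Gluster maintainers        # placeholder; rule 703 wants a real author
      description: Configure firewalld for GlusterFS storage hosts
      company: Gluster community         # optional, but must not be the default text
      license: GPLv3                     # assumption; use the project's actual license
      min_ansible_version: 1.2           # unchanged from the role's current metadata
      platforms:                         # rule 701: role info should contain platforms
        - name: EL
          versions:
            - 7
      galaxy_tags:
        - gluster
        - firewall
    dependencies: []]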
--> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── create └── prepare --> Scenario: 'default' --> Action: 'create' PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Mon May 13 00:15:55 2019 From: ci at centos.org (ci at centos.org) Date: Mon, 13 May 2019 00:15:55 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #362 In-Reply-To: <925609287.3866.1557620157874.JavaMail.jenkins@jenkins.ci.centos.org> References: <925609287.3866.1557620157874.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <122405230.15.1557706555429.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.41 KB...] Total 62 MB/s | 143 MB 00:02 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7_6.2.x86_64 7/49 Installing : elfutils-0.172-2.el7.x86_64 8/49 Installing : pakchois-0.4-10.el7.x86_64 9/49 Installing : unzip-6.0-19.el7.x86_64 10/49 Installing : dwz-0.11-3.el7.x86_64 11/49 Installing : bzip2-1.0.6-13.el7.x86_64 12/49 Installing : usermode-1.111-5.el7.x86_64 13/49 Installing : patch-2.7.1-10.el7_5.x86_64 14/49 Installing : python-backports-1.0-8.el7.x86_64 15/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 16/49 Installing : python-urllib3-1.10.2-5.el7.noarch 17/49 Installing : python-requests-2.6.0-1.el7_1.noarch 18/49 Installing : python-babel-0.9.6-8.el7.noarch 19/49 Installing : distribution-gpg-keys-1.30-1.el7.noarch 20/49 Installing : mock-core-configs-30.2-1.el7.noarch 21/49 Installing : libmodman-2.0.1-8.el7.x86_64 22/49 Installing : libproxy-0.4.11-11.el7.x86_64 23/49 Installing : python-markupsafe-0.11-10.el7.x86_64 24/49 Installing : python-jinja2-2.7.2-3.el7_6.noarch 25/49 Installing : python2-distro-1.2.0-3.el7.noarch 26/49 Installing : gdb-7.6.1-114.el7.x86_64 27/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 28/49 Installing : perl-srpm-macros-1-8.el7.noarch 29/49 Installing : pigz-2.3.4-1.el7.x86_64 30/49 Installing : python2-pyroute2-0.4.13-2.el7.noarch 31/49 Installing : golang-src-1.11.5-1.el7.noarch 32/49 Installing : kernel-headers-3.10.0-957.12.1.el7.x86_64 33/49 Installing : glibc-headers-2.17-260.el7_6.5.x86_64 34/49 Installing : glibc-devel-2.17-260.el7_6.5.x86_64 35/49 Installing : gcc-4.8.5-36.el7_6.2.x86_64 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-9.el7_6.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : 
golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.15-1.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 2/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 3/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 4/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 5/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 6/49 Verifying : zip-3.0-11.el7.x86_64 7/49 Verifying : nettle-2.7.1-8.el7.x86_64 8/49 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 9/49 Verifying : kernel-headers-3.10.0-957.12.1.el7.x86_64 10/49 Verifying : golang-src-1.11.5-1.el7.noarch 11/49 Verifying : python2-pyroute2-0.4.13-2.el7.noarch 12/49 Verifying : pigz-2.3.4-1.el7.x86_64 13/49 Verifying : perl-srpm-macros-1-8.el7.noarch 14/49 Verifying : golang-1.11.5-1.el7.x86_64 15/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 16/49 Verifying : glibc-devel-2.17-260.el7_6.5.x86_64 17/49 Verifying : python-jinja2-2.7.2-3.el7_6.noarch 18/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 19/49 Verifying : gdb-7.6.1-114.el7.x86_64 20/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 22/49 Verifying : gnutls-3.3.29-9.el7_6.x86_64 23/49 Verifying : python2-distro-1.2.0-3.el7.noarch 24/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 25/49 Verifying : libmodman-2.0.1-8.el7.x86_64 26/49 Verifying : mpfr-3.1.1-4.el7.x86_64 27/49 Verifying : distribution-gpg-keys-1.30-1.el7.noarch 28/49 Verifying : python-babel-0.9.6-8.el7.noarch 29/49 Verifying : mock-1.4.15-1.el7.noarch 30/49 Verifying : apr-util-1.5.2-6.el7.x86_64 31/49 Verifying : python-backports-1.0-8.el7.x86_64 32/49 Verifying : patch-2.7.1-10.el7_5.x86_64 33/49 Verifying : libmpc-1.0.1-3.el7.x86_64 34/49 Verifying : usermode-1.111-5.el7.x86_64 35/49 Verifying : python-six-1.9.0-2.el7.noarch 36/49 Verifying : libproxy-0.4.11-11.el7.x86_64 37/49 Verifying : neon-0.30.0-3.el7.x86_64 38/49 Verifying : mock-core-configs-30.2-1.el7.noarch 39/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 40/49 Verifying : bzip2-1.0.6-13.el7.x86_64 41/49 Verifying : subversion-1.7.14-14.el7.x86_64 42/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 43/49 Verifying : glibc-headers-2.17-260.el7_6.5.x86_64 44/49 Verifying : dwz-0.11-3.el7.x86_64 45/49 Verifying : unzip-6.0-19.el7.x86_64 46/49 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 47/49 Verifying : pakchois-0.4-10.el7.x86_64 48/49 Verifying : elfutils-0.172-2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.15-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.30-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5 glibc-headers.x86_64 0:2.17-260.el7_6.5 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.12.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 
perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-3.el7_6 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-3.el7 python2-pyroute2.noarch 0:0.4.13-2.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1835 0 --:--:-- --:--:-- --:--:-- 1838 100 8513k 100 8513k 0 0 12.7M 0 --:--:-- --:--:-- --:--:-- 12.7M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2329 0 --:--:-- --:--:-- --:--:-- 2339 8 38.3M 8 3172k 0 0 6551k 0 0:00:06 --:--:-- 0:00:06 6551k100 38.3M 100 38.3M 0 0 45.8M 0 --:--:-- --:--:-- --:--:-- 100M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 556 0 --:--:-- --:--:-- --:--:-- 556 0 0 0 620 0 0 1615 0 --:--:-- --:--:-- --:--:-- 1615 100 10.7M 100 10.7M 0 0 15.2M 0 --:--:-- --:--:-- --:--:-- 15.2M ~/nightlyrpmlFn008/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmlFn008/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmlFn008/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmlFn008 ~ INFO: mock.py version 1.4.15 starting (python version = 2.7.5)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmlFn008/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.15 INFO: Mock Version: 1.4.15 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmlFn008/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 23 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 262abb3c3a90499fbe98935ed6a2c9c3 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.ildTiH:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins1213380514555794287.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done bac21a1a +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 107 | n43.pufty | 172.19.3.107 | pufty | 3561 | Deployed | bac21a1a | None | None | 7 | x86_64 | 1 | 2420 | None | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Mon May 13 00:51:00 2019 From: ci at centos.org (ci at centos.org) Date: Mon, 13 May 2019 00:51:00 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10068 - Failure! 
(release-4.1 on CentOS-6/x86_64) Message-ID: <295770854.19.1557708660944.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10068 - Failure: Check console output at https://ci.centos.org/job/gluster_build-rpms/10068/ to view the results. From ci at centos.org Mon May 13 00:51:51 2019 From: ci at centos.org (ci at centos.org) Date: Mon, 13 May 2019 00:51:51 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10069 - Still Failing! (release-4.1 on CentOS-7/x86_64) In-Reply-To: <295770854.19.1557708660944.JavaMail.jenkins@jenkins.ci.centos.org> References: <295770854.19.1557708660944.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1596543828.21.1557708711345.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10069 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10069/ to view the results. From ci at centos.org Mon May 13 00:52:37 2019 From: ci at centos.org (ci at centos.org) Date: Mon, 13 May 2019 00:52:37 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10070 - Still Failing! (release-5 on CentOS-6/x86_64) In-Reply-To: <1596543828.21.1557708711345.JavaMail.jenkins@jenkins.ci.centos.org> References: <1596543828.21.1557708711345.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <302350748.23.1557708758041.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10070 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10070/ to view the results. From ci at centos.org Mon May 13 00:53:08 2019 From: ci at centos.org (ci at centos.org) Date: Mon, 13 May 2019 00:53:08 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10071 - Still Failing! (release-5 on CentOS-7/x86_64) In-Reply-To: <302350748.23.1557708758041.JavaMail.jenkins@jenkins.ci.centos.org> References: <302350748.23.1557708758041.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <160981473.25.1557708789016.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10071 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10071/ to view the results. From ci at centos.org Mon May 13 00:53:41 2019 From: ci at centos.org (ci at centos.org) Date: Mon, 13 May 2019 00:53:41 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10072 - Still Failing! (release-6 on CentOS-7/x86_64) In-Reply-To: <160981473.25.1557708789016.JavaMail.jenkins@jenkins.ci.centos.org> References: <160981473.25.1557708789016.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1728502219.27.1557708822150.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10072 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10072/ to view the results. From ci at centos.org Mon May 13 00:54:13 2019 From: ci at centos.org (ci at centos.org) Date: Mon, 13 May 2019 00:54:13 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10073 - Still Failing! (release-6 on CentOS-6/x86_64) In-Reply-To: <1728502219.27.1557708822150.JavaMail.jenkins@jenkins.ci.centos.org> References: <1728502219.27.1557708822150.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <596162652.29.1557708854245.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10073 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10073/ to view the results. 
From ci at centos.org Mon May 13 01:02:09 2019 From: ci at centos.org (ci at centos.org) Date: Mon, 13 May 2019 01:02:09 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #191 In-Reply-To: <719469240.3882.1557623140122.JavaMail.jenkins@jenkins.ci.centos.org> References: <719469240.3882.1557623140122.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <737693500.31.1557709329893.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 55.46 KB...] changed: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. 
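[These are the same five findings as in build #190, so this job fails deterministically at the lint action until the role metadata is corrected. If that fix has to wait, ansible-lint can be told to skip specific rules through a .ansible-lint config file at the repository root; a sketch follows, offered as a stopgap rather than a fix (the real remedy is filling in meta/main.yml as outlined above):

    # .ansible-lint
    skip_list:
      - '701'  # Role info should contain platforms
      - '703'  # Should change default metadata]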
--> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── create └── prepare --> Scenario: 'default' --> Action: 'create' PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Mon May 13 01:02:29 2019 From: ci at centos.org (ci at centos.org) Date: Mon, 13 May 2019 01:02:29 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #166 In-Reply-To: <50586485.3881.1557622532892.JavaMail.jenkins@jenkins.ci.centos.org> References: <50586485.3881.1557622532892.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <18196753.32.1557709349504.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 398.67 KB...] TASK [network_plugin/contiv : Contiv | Copy the generated certificate on nodes] *** Monday 13 May 2019 01:51:50 +0100 (0:00:00.174) 0:14:04.978 ************ TASK [network_plugin/contiv : Contiv | Set cni directory permissions] ********** Monday 13 May 2019 01:51:50 +0100 (0:00:00.388) 0:14:05.367 ************ TASK [network_plugin/contiv : Contiv | Copy cni plugins] *********************** Monday 13 May 2019 01:51:51 +0100 (0:00:00.306) 0:14:05.673 ************ TASK [network_plugin/contiv : Contiv | Copy netctl binary from docker container] *** Monday 13 May 2019 01:51:51 +0100 (0:00:00.264) 0:14:05.937 ************ TASK [network_plugin/kube-router : kube-router | Add annotations on kube-master] *** Monday 13 May 2019 01:51:51 +0100 (0:00:00.265) 0:14:06.202 ************ TASK [network_plugin/kube-router : kube-router | Add annotations on kube-node] *** Monday 13 May 2019 01:51:52 +0100 (0:00:00.293) 0:14:06.496 ************ TASK [network_plugin/kube-router : kube-router | Add common annotations on all servers] *** Monday 13 May 2019 01:51:52 +0100 (0:00:00.260) 0:14:06.757 ************ TASK [network_plugin/kube-router : kube-roter | Set cni directory permissions] *** Monday 13 May 2019 01:51:52 +0100 (0:00:00.376) 0:14:07.133 ************ TASK [network_plugin/kube-router : kube-router | Copy cni plugins] ************* Monday 13 May 2019 01:51:52 +0100 (0:00:00.301) 0:14:07.435 ************ TASK [network_plugin/kube-router : kube-router | Create manifest] ************** Monday 13 May 2019 01:51:53 +0100 (0:00:00.292) 0:14:07.728 ************ TASK [network_plugin/cloud : Cloud | Set cni directory permissions] ************ Monday 13 May 2019 01:51:53 +0100 (0:00:00.310) 0:14:08.039 ************ TASK [network_plugin/cloud : Canal | Copy cni plugins] ************************* Monday 13 May 2019 01:51:53 +0100 (0:00:00.287) 0:14:08.326 ************ TASK [network_plugin/multus : Multus | Copy manifest files] ******************** Monday 13 May 2019 01:51:54 +0100 (0:00:00.267) 0:14:08.594 ************ TASK [network_plugin/multus : Multus | Copy manifest templates] **************** Monday 13 May 2019 01:51:54 +0100 (0:00:00.357) 0:14:08.952 ************ RUNNING HANDLER [kubernetes/kubeadm : restart kubelet] ************************* Monday 13 May 2019 01:51:54 +0100 (0:00:00.234) 0:14:09.186 ************ changed: [kube3] PLAY [kube-master[0]] ********************************************************** TASK [download : include_tasks] ************************************************ Monday 13 May 2019 01:51:56 +0100 (0:00:01.505) 0:14:10.692 ************ TASK [download : Download items] *********************************************** Monday 13 May 2019 01:51:56 +0100 
(0:00:00.154) 0:14:10.847 ************ TASK [download : Sync container] *********************************************** Monday 13 May 2019 01:51:58 +0100 (0:00:01.686) 0:14:12.533 ************ TASK [download : include_tasks] ************************************************ Monday 13 May 2019 01:51:59 +0100 (0:00:01.543) 0:14:14.077 ************ TASK [kubespray-defaults : Configure defaults] ********************************* Monday 13 May 2019 01:51:59 +0100 (0:00:00.185) 0:14:14.262 ************ ok: [kube1] => { "msg": "Check roles/kubespray-defaults/defaults/main.yml" } TASK [kubernetes-apps/rotate_tokens : Rotate Tokens | Get default token name] *** Monday 13 May 2019 01:52:00 +0100 (0:00:00.550) 0:14:14.812 ************ ok: [kube1] TASK [kubernetes-apps/rotate_tokens : Rotate Tokens | Get default token data] *** Monday 13 May 2019 01:52:01 +0100 (0:00:01.200) 0:14:16.012 ************ ok: [kube1] TASK [kubernetes-apps/rotate_tokens : Rotate Tokens | Test if default certificate is expired] *** Monday 13 May 2019 01:52:02 +0100 (0:00:01.269) 0:14:17.282 ************ ok: [kube1] TASK [kubernetes-apps/rotate_tokens : Rotate Tokens | Determine if certificate is expired] *** Monday 13 May 2019 01:52:04 +0100 (0:00:01.804) 0:14:19.087 ************ ok: [kube1] TASK [kubernetes-apps/rotate_tokens : Rotate Tokens | Get all serviceaccount tokens to expire] *** Monday 13 May 2019 01:52:05 +0100 (0:00:00.464) 0:14:19.551 ************ TASK [kubernetes-apps/rotate_tokens : Rotate Tokens | Delete expired tokens] *** Monday 13 May 2019 01:52:05 +0100 (0:00:00.131) 0:14:19.682 ************ TASK [kubernetes-apps/rotate_tokens : Rotate Tokens | Delete pods in system namespace] *** Monday 13 May 2019 01:52:05 +0100 (0:00:00.143) 0:14:19.826 ************ TASK [win_nodes/kubernetes_patch : Ensure that user manifests directory exists] *** Monday 13 May 2019 01:52:05 +0100 (0:00:00.150) 0:14:19.977 ************ changed: [kube1] TASK [win_nodes/kubernetes_patch : Copy kube-proxy daemonset hostnameOverride patch] *** Monday 13 May 2019 01:52:06 +0100 (0:00:00.982) 0:14:20.960 ************ changed: [kube1] TASK [win_nodes/kubernetes_patch : Check current command for kube-proxy daemonset] *** Monday 13 May 2019 01:52:08 +0100 (0:00:02.258) 0:14:23.219 ************ changed: [kube1] TASK [win_nodes/kubernetes_patch : Apply hostnameOverride patch for kube-proxy daemonset] *** Monday 13 May 2019 01:52:10 +0100 (0:00:01.535) 0:14:24.754 ************ changed: [kube1] TASK [win_nodes/kubernetes_patch : debug] ************************************** Monday 13 May 2019 01:52:11 +0100 (0:00:01.381) 0:14:26.136 ************ ok: [kube1] => { "msg": [ "daemonset.extensions/kube-proxy patched" ] } TASK [win_nodes/kubernetes_patch : debug] ************************************** Monday 13 May 2019 01:52:12 +0100 (0:00:00.523) 0:14:26.659 ************ ok: [kube1] => { "msg": [] } TASK [win_nodes/kubernetes_patch : Copy kube-proxy daemonset nodeselector patch] *** Monday 13 May 2019 01:52:12 +0100 (0:00:00.359) 0:14:27.019 ************ changed: [kube1] TASK [win_nodes/kubernetes_patch : Check current nodeselector for kube-proxy daemonset] *** Monday 13 May 2019 01:52:14 +0100 (0:00:02.099) 0:14:29.119 ************ changed: [kube1] TASK [win_nodes/kubernetes_patch : Apply nodeselector patch for kube-proxy daemonset] *** Monday 13 May 2019 01:52:15 +0100 (0:00:01.272) 0:14:30.391 ************ changed: [kube1] TASK [win_nodes/kubernetes_patch : debug] ************************************** Monday 13 May 2019 01:52:17 +0100 (0:00:01.427) 
0:14:31.819 ************ ok: [kube1] => { "msg": [ "daemonset.extensions/kube-proxy patched" ] } TASK [win_nodes/kubernetes_patch : debug] ************************************** Monday 13 May 2019 01:52:17 +0100 (0:00:00.386) 0:14:32.206 ************ ok: [kube1] => { "msg": [] } PLAY [kube-master] ************************************************************* TASK [download : include_tasks] ************************************************ Monday 13 May 2019 01:52:18 +0100 (0:00:00.543) 0:14:32.749 ************ TASK [download : Download items] *********************************************** Monday 13 May 2019 01:52:18 +0100 (0:00:00.189) 0:14:32.938 ************ TASK [download : Sync container] *********************************************** Monday 13 May 2019 01:52:20 +0100 (0:00:01.669) 0:14:34.608 ************ TASK [download : include_tasks] ************************************************ Monday 13 May 2019 01:52:21 +0100 (0:00:01.768) 0:14:36.377 ************ TASK [kubespray-defaults : Configure defaults] ********************************* Monday 13 May 2019 01:52:22 +0100 (0:00:00.223) 0:14:36.600 ************ ok: [kube1] => { "msg": "Check roles/kubespray-defaults/defaults/main.yml" } ok: [kube2] => { "msg": "Check roles/kubespray-defaults/defaults/main.yml" } TASK [kubernetes-apps/network_plugin/cilium : Cilium | Start Resources] ******** Monday 13 May 2019 01:52:22 +0100 (0:00:00.568) 0:14:37.169 ************ TASK [kubernetes-apps/network_plugin/cilium : Cilium | Wait for pods to run] *** Monday 13 May 2019 01:52:23 +0100 (0:00:00.449) 0:14:37.618 ************ TASK [kubernetes-apps/network_plugin/calico : Start Calico resources] ********** Monday 13 May 2019 01:52:23 +0100 (0:00:00.209) 0:14:37.828 ************ TASK [kubernetes-apps/network_plugin/calico : calico upgrade complete] ********* Monday 13 May 2019 01:52:23 +0100 (0:00:00.174) 0:14:38.003 ************ TASK [kubernetes-apps/network_plugin/canal : Canal | Start Resources] ********** Monday 13 May 2019 01:52:23 +0100 (0:00:00.238) 0:14:38.241 ************ TASK [kubernetes-apps/network_plugin/flannel : Flannel | Start Resources] ****** Monday 13 May 2019 01:52:24 +0100 (0:00:00.439) 0:14:38.680 ************ ok: [kube1] => (item={'_ansible_parsed': True, u'md5sum': u'973704ff91b4c9341dccaf1da6003177', u'uid': 0, u'dest': u'/etc/kubernetes/cni-flannel-rbac.yml', '_ansible_item_result': True, '_ansible_no_log': False, u'owner': u'root', 'diff': [], u'size': 836, u'src': u'/home/vagrant/.ansible/tmp/ansible-tmp-1557708698.88-87612009870434/source', u'group': u'root', '_ansible_item_label': {u'type': u'sa', u'name': u'flannel', u'file': u'cni-flannel-rbac.yml'}, 'item': {u'type': u'sa', u'name': u'flannel', u'file': u'cni-flannel-rbac.yml'}, u'checksum': u'8c69db180ab422f55a122372bee4620dfb2ad0ed', u'changed': True, 'failed': False, u'state': u'file', u'gid': 0, u'secontext': u'system_u:object_r:etc_t:s0', u'mode': u'0644', u'invocation': {u'module_args': {u'directory_mode': None, u'force': True, u'remote_src': None, u'dest': u'/etc/kubernetes/cni-flannel-rbac.yml', u'selevel': None, u'_original_basename': u'cni-flannel-rbac.yml.j2', u'delimiter': None, u'regexp': None, u'owner': None, u'follow': False, u'validate': None, u'local_follow': None, u'src': u'/home/vagrant/.ansible/tmp/ansible-tmp-1557708698.88-87612009870434/source', u'group': None, u'unsafe_writes': None, u'checksum': u'8c69db180ab422f55a122372bee4620dfb2ad0ed', u'seuser': None, u'serole': None, u'content': None, u'setype': None, u'mode': None, u'attributes': None, 
u'backup': False}}, '_ansible_ignore_errors': None}) ok: [kube1] => (item={'_ansible_parsed': True, u'md5sum': u'51829ca2a2d540389c94291f63118112', u'uid': 0, u'dest': u'/etc/kubernetes/cni-flannel.yml', '_ansible_item_result': True, '_ansible_no_log': False, u'owner': u'root', 'diff': [], u'size': 3198, u'src': u'/home/vagrant/.ansible/tmp/ansible-tmp-1557708700.48-112660941555190/source', u'group': u'root', '_ansible_item_label': {u'type': u'ds', u'name': u'kube-flannel', u'file': u'cni-flannel.yml'}, 'item': {u'type': u'ds', u'name': u'kube-flannel', u'file': u'cni-flannel.yml'}, u'checksum': u'0b1393229c9e863d63eff80c96bda56568b58e82', u'changed': True, 'failed': False, u'state': u'file', u'gid': 0, u'secontext': u'system_u:object_r:etc_t:s0', u'mode': u'0644', u'invocation': {u'module_args': {u'directory_mode': None, u'force': True, u'remote_src': None, u'dest': u'/etc/kubernetes/cni-flannel.yml', u'selevel': None, u'_original_basename': u'cni-flannel.yml.j2', u'delimiter': None, u'regexp': None, u'owner': None, u'follow': False, u'validate': None, u'local_follow': None, u'src': u'/home/vagrant/.ansible/tmp/ansible-tmp-1557708700.48-112660941555190/source', u'group': None, u'unsafe_writes': None, u'checksum': u'0b1393229c9e863d63eff80c96bda56568b58e82', u'seuser': None, u'serole': None, u'content': None, u'setype': None, u'mode': None, u'attributes': None, u'backup': False}}, '_ansible_ignore_errors': None}) TASK [kubernetes-apps/network_plugin/flannel : Flannel | Wait for flannel subnet.env file presence] *** Monday 13 May 2019 01:52:27 +0100 (0:00:03.054) 0:14:41.735 ************ ok: [kube1] fatal: [kube2]: FAILED! => {"changed": false, "elapsed": 600, "msg": "Timeout when waiting for file /run/flannel/subnet.env"} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry PLAY RECAP ********************************************************************* kube1 : ok=364 changed=103 unreachable=0 failed=0 kube2 : ok=315 changed=91 unreachable=0 failed=1 kube3 : ok=282 changed=78 unreachable=0 failed=0 Monday 13 May 2019 02:02:29 +0100 (0:10:01.810) 0:24:43.545 ************ =============================================================================== kubernetes-apps/network_plugin/flannel : Flannel | Wait for flannel subnet.env file presence - 601.81s kubernetes/master : kubeadm | Initialize first master ------------------ 40.76s kubernetes/master : kubeadm | Init other uninitialized masters --------- 38.34s download : container_download | download images for kubeadm config images -- 33.44s etcd : Gen_certs | Write etcd master certs ----------------------------- 32.77s Install packages ------------------------------------------------------- 32.24s kubernetes/master : kubeadm | write out kubeadm certs ------------------ 20.91s Wait for host to be available ------------------------------------------ 20.78s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 15.53s gather facts from all instances ---------------------------------------- 14.35s etcd : Gen_certs | Gather etcd master certs ---------------------------- 13.23s container-engine/docker : Docker | pause while Docker restarts --------- 10.41s download : file_download | Download item ------------------------------- 10.05s download : container_download | Download containers if pull is required or told to always pull (all nodes) --- 9.38s kubernetes/master : slurp kubeadm certs 
--------------------------------- 8.14s download : container_download | Download containers if pull is required or told to always pull (all nodes) --- 7.44s etcd : Configure | Check if etcd cluster is healthy --------------------- 6.01s download : container_download | Download containers if pull is required or told to always pull (all nodes) --- 5.99s Persist loaded modules -------------------------------------------------- 5.07s download : container_download | Download containers if pull is required or told to always pull (all nodes) --- 5.04s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Tue May 14 00:15:58 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 14 May 2019 00:15:58 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #363 In-Reply-To: <122405230.15.1557706555429.JavaMail.jenkins@jenkins.ci.centos.org> References: <122405230.15.1557706555429.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1508316559.110.1557792958132.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.37 KB...] 
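(Context for the failed build above, before the next log begins: kubespray's "Flannel | Wait for flannel subnet.env file presence" task failed on kube2 because /run/flannel/subnet.env, the file flannel writes once it has obtained a subnet lease, never appeared within the 600-second budget, which is why the recap shows kube2 with failed=1 and the play aborting. A rough shell equivalent of that wait, useful when poking at a node by hand, is sketched below; the path and the 600s limit come from the log, the 5-second poll interval and everything else are illustrative.)

# Sketch: poll for flannel's subnet file, roughly what the playbook's wait does.
timeout=600
elapsed=0
until [ -f /run/flannel/subnet.env ]; do
    sleep 5
    elapsed=$((elapsed + 5))
    if [ "$elapsed" -ge "$timeout" ]; then
        echo "Timeout waiting for /run/flannel/subnet.env" >&2
        exit 1
    fi
done
# If the wait succeeds, the lease details are in the file itself.
cat /run/flannel/subnet.env

If the file never shows up, the flannel pod's logs on that node are the next thing to inspect.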
Total 68 MB/s | 143 MB 00:02 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7_6.2.x86_64 7/49 Installing : elfutils-0.172-2.el7.x86_64 8/49 Installing : pakchois-0.4-10.el7.x86_64 9/49 Installing : unzip-6.0-19.el7.x86_64 10/49 Installing : dwz-0.11-3.el7.x86_64 11/49 Installing : bzip2-1.0.6-13.el7.x86_64 12/49 Installing : usermode-1.111-5.el7.x86_64 13/49 Installing : patch-2.7.1-10.el7_5.x86_64 14/49 Installing : python-backports-1.0-8.el7.x86_64 15/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 16/49 Installing : python-urllib3-1.10.2-5.el7.noarch 17/49 Installing : python-requests-2.6.0-1.el7_1.noarch 18/49 Installing : python-babel-0.9.6-8.el7.noarch 19/49 Installing : distribution-gpg-keys-1.30-1.el7.noarch 20/49 Installing : mock-core-configs-30.2-1.el7.noarch 21/49 Installing : libmodman-2.0.1-8.el7.x86_64 22/49 Installing : libproxy-0.4.11-11.el7.x86_64 23/49 Installing : python-markupsafe-0.11-10.el7.x86_64 24/49 Installing : python-jinja2-2.7.2-3.el7_6.noarch 25/49 Installing : python2-distro-1.2.0-3.el7.noarch 26/49 Installing : gdb-7.6.1-114.el7.x86_64 27/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 28/49 Installing : perl-srpm-macros-1-8.el7.noarch 29/49 Installing : pigz-2.3.4-1.el7.x86_64 30/49 Installing : python2-pyroute2-0.4.13-2.el7.noarch 31/49 Installing : golang-src-1.11.5-1.el7.noarch 32/49 Installing : kernel-headers-3.10.0-957.12.1.el7.x86_64 33/49 Installing : glibc-headers-2.17-260.el7_6.5.x86_64 34/49 Installing : glibc-devel-2.17-260.el7_6.5.x86_64 35/49 Installing : gcc-4.8.5-36.el7_6.2.x86_64 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-9.el7_6.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.15-1.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 2/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 3/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 4/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 5/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 6/49 Verifying : zip-3.0-11.el7.x86_64 7/49 Verifying : nettle-2.7.1-8.el7.x86_64 8/49 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 9/49 Verifying : kernel-headers-3.10.0-957.12.1.el7.x86_64 10/49 Verifying : golang-src-1.11.5-1.el7.noarch 11/49 Verifying : python2-pyroute2-0.4.13-2.el7.noarch 12/49 Verifying : pigz-2.3.4-1.el7.x86_64 13/49 Verifying : perl-srpm-macros-1-8.el7.noarch 
14/49 Verifying : golang-1.11.5-1.el7.x86_64 15/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 16/49 Verifying : glibc-devel-2.17-260.el7_6.5.x86_64 17/49 Verifying : python-jinja2-2.7.2-3.el7_6.noarch 18/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 19/49 Verifying : gdb-7.6.1-114.el7.x86_64 20/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 22/49 Verifying : gnutls-3.3.29-9.el7_6.x86_64 23/49 Verifying : python2-distro-1.2.0-3.el7.noarch 24/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 25/49 Verifying : libmodman-2.0.1-8.el7.x86_64 26/49 Verifying : mpfr-3.1.1-4.el7.x86_64 27/49 Verifying : distribution-gpg-keys-1.30-1.el7.noarch 28/49 Verifying : python-babel-0.9.6-8.el7.noarch 29/49 Verifying : mock-1.4.15-1.el7.noarch 30/49 Verifying : apr-util-1.5.2-6.el7.x86_64 31/49 Verifying : python-backports-1.0-8.el7.x86_64 32/49 Verifying : patch-2.7.1-10.el7_5.x86_64 33/49 Verifying : libmpc-1.0.1-3.el7.x86_64 34/49 Verifying : usermode-1.111-5.el7.x86_64 35/49 Verifying : python-six-1.9.0-2.el7.noarch 36/49 Verifying : libproxy-0.4.11-11.el7.x86_64 37/49 Verifying : neon-0.30.0-3.el7.x86_64 38/49 Verifying : mock-core-configs-30.2-1.el7.noarch 39/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 40/49 Verifying : bzip2-1.0.6-13.el7.x86_64 41/49 Verifying : subversion-1.7.14-14.el7.x86_64 42/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 43/49 Verifying : glibc-headers-2.17-260.el7_6.5.x86_64 44/49 Verifying : dwz-0.11-3.el7.x86_64 45/49 Verifying : unzip-6.0-19.el7.x86_64 46/49 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 47/49 Verifying : pakchois-0.4-10.el7.x86_64 48/49 Verifying : elfutils-0.172-2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.15-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.30-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5 glibc-headers.x86_64 0:2.17-260.el7_6.5 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.12.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-3.el7_6 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-3.el7 python2-pyroute2.noarch 0:0.4.13-2.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1676 0 --:--:-- --:--:-- --:--:-- 1680 0 8513k 0 16360 0 0 33336 0 0:04:21 --:--:-- 0:04:21 33336100 8513k 100 8513k 0 0 13.9M 0 --:--:-- --:--:-- --:--:-- 80.5M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2186 0 --:--:-- --:--:-- --:--:-- 2192 100 38.3M 100 38.3M 0 0 45.8M 0 --:--:-- --:--:-- --:--:-- 45.8M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 554 0 --:--:-- --:--:-- --:--:-- 556 0 0 0 620 0 0 1584 0 --:--:-- --:--:-- --:--:-- 1584 13 10.7M 13 1461k 0 0 2444k 0 0:00:04 --:--:-- 0:00:04 2444k100 10.7M 100 10.7M 0 0 11.3M 0 --:--:-- --:--:-- --:--:-- 26.5M ~/nightlyrpmxzrRMI/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmxzrRMI/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmxzrRMI/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmxzrRMI ~ INFO: mock.py version 1.4.15 starting (python version = 2.7.5)... Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmxzrRMI/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.15 INFO: Mock Version: 1.4.15 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmxzrRMI/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 23 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 3ee4a0d81347495ca47a019293dc5bcc -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.ASTUdL:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
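(The mock run above dies while rpmbuild processes glusterd2.spec inside the epel-7-x86_64 chroot; the actual compile or packaging error is in the build.log that mock leaves in its result directory, here reported as /srv/glusterd2/nightly/master/7/x86_64. A sketch of reproducing the failing step by hand on a machine with mock installed; the config name and SRPM path are taken from the log, the default local result path is an assumption:)

# Rebuild the same SRPM in the same mock config as the CI job.
mock -r epel-7-x86_64 --rebuild \
    /root/nightlyrpmxzrRMI/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
# A default mock setup writes results under /var/lib/mock/<config>/result;
# the CI job above redirects them to /srv/glusterd2/nightly/master/7/x86_64.
less /var/lib/mock/epel-7-x86_64/result/build.log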
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins685174238087657620.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done bfe051d0 +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 177 | n50.crusty | 172.19.2.50 | crusty | 3567 | Deployed | bfe051d0 | None | None | 7 | x86_64 | 1 | 2490 | None | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Tue May 14 00:50:49 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 14 May 2019 00:50:49 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10076 - Failure! (release-4.1 on CentOS-6/x86_64) Message-ID: <2038118803.112.1557795049740.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10076 - Failure: Check console output at https://ci.centos.org/job/gluster_build-rpms/10076/ to view the results. From ci at centos.org Tue May 14 00:51:22 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 14 May 2019 00:51:22 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10077 - Still Failing! (release-4.1 on CentOS-7/x86_64) In-Reply-To: <2038118803.112.1557795049740.JavaMail.jenkins@jenkins.ci.centos.org> References: <2038118803.112.1557795049740.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <579757722.114.1557795082244.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10077 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10077/ to view the results. From ci at centos.org Tue May 14 00:51:53 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 14 May 2019 00:51:53 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10078 - Still Failing! (release-5 on CentOS-6/x86_64) In-Reply-To: <579757722.114.1557795082244.JavaMail.jenkins@jenkins.ci.centos.org> References: <579757722.114.1557795082244.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <595041717.116.1557795113505.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10078 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10078/ to view the results. From ci at centos.org Tue May 14 00:52:23 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 14 May 2019 00:52:23 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10079 - Still Failing! 
(release-5 on CentOS-7/x86_64) In-Reply-To: <595041717.116.1557795113505.JavaMail.jenkins@jenkins.ci.centos.org> References: <595041717.116.1557795113505.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1202224042.118.1557795143554.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10079 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10079/ to view the results. From ci at centos.org Tue May 14 00:52:53 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 14 May 2019 00:52:53 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10080 - Still Failing! (release-6 on CentOS-7/x86_64) In-Reply-To: <1202224042.118.1557795143554.JavaMail.jenkins@jenkins.ci.centos.org> References: <1202224042.118.1557795143554.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1885937818.120.1557795174145.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10080 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10080/ to view the results. From ci at centos.org Tue May 14 00:53:42 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 14 May 2019 00:53:42 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10081 - Still Failing! (release-6 on CentOS-6/x86_64) In-Reply-To: <1885937818.120.1557795174145.JavaMail.jenkins@jenkins.ci.centos.org> References: <1885937818.120.1557795174145.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1486025921.122.1557795223001.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10081 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10081/ to view the results. From ci at centos.org Tue May 14 01:01:43 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 14 May 2019 01:01:43 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #192 In-Reply-To: <737693500.31.1557709329893.JavaMail.jenkins@jenkins.ci.centos.org> References: <737693500.31.1557709329893.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1633214447.124.1557795703085.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 55.82 KB...] changed: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. 
--> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... [701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up.
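(Both lint rules above point at boilerplate in the role's meta/main.yml: [701] fires because galaxy_info lists no platforms, and [703] fires because author, description, company and license still carry the galaxy-init placeholder strings visible in the dumped dict. A sketch of the kind of metadata that satisfies both rules; every value below is an illustrative placeholder, not the project's actual metadata:)

# Replace the template galaxy metadata flagged by ansible-lint 701/703.
# All values are examples; the rules only check that the defaults are
# gone and that a platforms list exists.
cat > roles/firewall_config/meta/main.yml <<'EOF'
galaxy_info:
  author: Gluster maintainers
  description: Configure firewalld ports and services for Gluster hosts
  company: Red Hat
  license: GPLv3
  min_ansible_version: 2.4
  platforms:
    - name: EL
      versions:
        - 7
  galaxy_tags: []
dependencies: []
EOF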
--> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── create └── prepare --> Scenario: 'default' --> Action: 'create' PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Tue May 14 01:04:46 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 14 May 2019 01:04:46 +0000 (UTC) Subject: [CI-results] Jenkins build is back to normal : gluster_anteater_gcs #167 In-Reply-To: <18196753.32.1557709349504.JavaMail.jenkins@jenkins.ci.centos.org> References: <18196753.32.1557709349504.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1225387316.125.1557795886789.JavaMail.jenkins@jenkins.ci.centos.org> See From ci at centos.org Tue May 14 06:10:48 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 14 May 2019 06:10:48 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10085 - Failure! (release-4.1 on CentOS-7/x86_64) Message-ID: <1915468375.150.1557814249061.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10085 - Failure: Check console output at https://ci.centos.org/job/gluster_build-rpms/10085/ to view the results. From ci at centos.org Tue May 14 06:10:52 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 14 May 2019 06:10:52 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10084 - Failure! (release-4.1 on CentOS-6/x86_64) Message-ID: <1377897876.152.1557814252804.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10084 - Failure: Check console output at https://ci.centos.org/job/gluster_build-rpms/10084/ to view the results. From ci at centos.org Tue May 14 06:13:41 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 14 May 2019 06:13:41 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10086 - Still Failing! (release-5 on CentOS-6/x86_64) In-Reply-To: <1915468375.150.1557814249061.JavaMail.jenkins@jenkins.ci.centos.org> References: <1915468375.150.1557814249061.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1762923330.155.1557814421723.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10086 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10086/ to view the results. From ci at centos.org Tue May 14 06:13:41 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 14 May 2019 06:13:41 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10087 - Still Failing! (release-5 on CentOS-7/x86_64) Message-ID: <760967778.156.1557814421733.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10087 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10087/ to view the results. From ci at centos.org Tue May 14 06:16:14 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 14 May 2019 06:16:14 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10089 - Failure! (release-6 on CentOS-6/x86_64) Message-ID: <1503098806.159.1557814574865.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10089 - Failure: Check console output at https://ci.centos.org/job/gluster_build-rpms/10089/ to view the results. From ci at centos.org Tue May 14 06:16:14 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 14 May 2019 06:16:14 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10088 - Still Failing! 
(release-6 on CentOS-7/x86_64) In-Reply-To: <760967778.156.1557814421733.JavaMail.jenkins@jenkins.ci.centos.org> References: <760967778.156.1557814421733.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <825109586.160.1557814574912.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10088 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10088/ to view the results. From ci at centos.org Tue May 14 08:15:30 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 14 May 2019 08:15:30 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10091 - Failure! (master on CentOS-6/x86_64) Message-ID: <255548197.164.1557821731350.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10091 - Failure: Check console output at https://ci.centos.org/job/gluster_build-rpms/10091/ to view the results. From ci at centos.org Tue May 14 08:15:32 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 14 May 2019 08:15:32 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10090 - Still Failing! (master on CentOS-7/x86_64) In-Reply-To: <1503098806.159.1557814574865.JavaMail.jenkins@jenkins.ci.centos.org> References: <1503098806.159.1557814574865.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1788969050.166.1557821732820.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10090 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10090/ to view the results. From ci at centos.org Tue May 14 08:15:32 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 14 May 2019 08:15:32 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10093 - Failure! (release-4.1 on CentOS-7/x86_64) Message-ID: <1687375844.169.1557821733130.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10093 - Failure: Check console output at https://ci.centos.org/job/gluster_build-rpms/10093/ to view the results. From ci at centos.org Tue May 14 08:15:33 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 14 May 2019 08:15:33 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10092 - Still Failing! (release-4.1 on CentOS-6/x86_64) In-Reply-To: <255548197.164.1557821731350.JavaMail.jenkins@jenkins.ci.centos.org> References: <255548197.164.1557821731350.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1980212305.170.1557821733186.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10092 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10092/ to view the results. From ci at centos.org Tue May 14 08:17:14 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 14 May 2019 08:17:14 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10096 - Failure! (release-6 on CentOS-7/x86_64) Message-ID: <15333656.172.1557821834322.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10096 - Failure: Check console output at https://ci.centos.org/job/gluster_build-rpms/10096/ to view the results. From ci at centos.org Tue May 14 08:18:08 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 14 May 2019 08:18:08 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10097 - Still Failing! 
(release-6 on CentOS-6/x86_64) In-Reply-To: <15333656.172.1557821834322.JavaMail.jenkins@jenkins.ci.centos.org> References: <15333656.172.1557821834322.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1049953829.174.1557821888570.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10097 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10097/ to view the results. From ci at centos.org Tue May 14 08:18:10 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 14 May 2019 08:18:10 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10094 - Still Failing! (release-5 on CentOS-6/x86_64) In-Reply-To: <1687375844.169.1557821733130.JavaMail.jenkins@jenkins.ci.centos.org> References: <1687375844.169.1557821733130.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <459867936.176.1557821890629.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10094 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10094/ to view the results. From ci at centos.org Tue May 14 08:18:12 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 14 May 2019 08:18:12 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10095 - Still Failing! (release-5 on CentOS-7/x86_64) In-Reply-To: <459867936.176.1557821890629.JavaMail.jenkins@jenkins.ci.centos.org> References: <459867936.176.1557821890629.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1499833844.178.1557821892934.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10095 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10095/ to view the results. From ci at centos.org Wed May 15 00:15:57 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 15 May 2019 00:15:57 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #364 In-Reply-To: <1508316559.110.1557792958132.JavaMail.jenkins@jenkins.ci.centos.org> References: <1508316559.110.1557792958132.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1457029188.262.1557879357165.JavaMail.jenkins@jenkins.ci.centos.org> See Changes: [amarts] nightly-builds: fix shallow cloning of git repository for non-master [amarts] nightly-builds: create a VERSION file to not need tags in the git repo [amarts] gluster-blockd: configure without tirpc on CentOS (#60) ------------------------------------------ [...truncated 37.41 KB...] 
Total 59 MB/s | 143 MB 00:02 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7_6.2.x86_64 7/49 Installing : elfutils-0.172-2.el7.x86_64 8/49 Installing : pakchois-0.4-10.el7.x86_64 9/49 Installing : unzip-6.0-19.el7.x86_64 10/49 Installing : dwz-0.11-3.el7.x86_64 11/49 Installing : bzip2-1.0.6-13.el7.x86_64 12/49 Installing : usermode-1.111-5.el7.x86_64 13/49 Installing : patch-2.7.1-10.el7_5.x86_64 14/49 Installing : python-backports-1.0-8.el7.x86_64 15/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 16/49 Installing : python-urllib3-1.10.2-5.el7.noarch 17/49 Installing : python-requests-2.6.0-1.el7_1.noarch 18/49 Installing : python-babel-0.9.6-8.el7.noarch 19/49 Installing : distribution-gpg-keys-1.30-1.el7.noarch 20/49 Installing : mock-core-configs-30.2-1.el7.noarch 21/49 Installing : libmodman-2.0.1-8.el7.x86_64 22/49 Installing : libproxy-0.4.11-11.el7.x86_64 23/49 Installing : python-markupsafe-0.11-10.el7.x86_64 24/49 Installing : python-jinja2-2.7.2-3.el7_6.noarch 25/49 Installing : python2-distro-1.2.0-3.el7.noarch 26/49 Installing : gdb-7.6.1-114.el7.x86_64 27/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 28/49 Installing : perl-srpm-macros-1-8.el7.noarch 29/49 Installing : pigz-2.3.4-1.el7.x86_64 30/49 Installing : python2-pyroute2-0.4.13-2.el7.noarch 31/49 Installing : golang-src-1.11.5-1.el7.noarch 32/49 Installing : kernel-headers-3.10.0-957.12.1.el7.x86_64 33/49 Installing : glibc-headers-2.17-260.el7_6.5.x86_64 34/49 Installing : glibc-devel-2.17-260.el7_6.5.x86_64 35/49 Installing : gcc-4.8.5-36.el7_6.2.x86_64 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-9.el7_6.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.15-1.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 2/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 3/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 4/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 5/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 6/49 Verifying : zip-3.0-11.el7.x86_64 7/49 Verifying : nettle-2.7.1-8.el7.x86_64 8/49 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 9/49 Verifying : kernel-headers-3.10.0-957.12.1.el7.x86_64 10/49 Verifying : golang-src-1.11.5-1.el7.noarch 11/49 Verifying : python2-pyroute2-0.4.13-2.el7.noarch 12/49 Verifying : pigz-2.3.4-1.el7.x86_64 13/49 Verifying : perl-srpm-macros-1-8.el7.noarch 
14/49 Verifying : golang-1.11.5-1.el7.x86_64 15/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 16/49 Verifying : glibc-devel-2.17-260.el7_6.5.x86_64 17/49 Verifying : python-jinja2-2.7.2-3.el7_6.noarch 18/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 19/49 Verifying : gdb-7.6.1-114.el7.x86_64 20/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 21/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 22/49 Verifying : gnutls-3.3.29-9.el7_6.x86_64 23/49 Verifying : python2-distro-1.2.0-3.el7.noarch 24/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 25/49 Verifying : libmodman-2.0.1-8.el7.x86_64 26/49 Verifying : mpfr-3.1.1-4.el7.x86_64 27/49 Verifying : distribution-gpg-keys-1.30-1.el7.noarch 28/49 Verifying : python-babel-0.9.6-8.el7.noarch 29/49 Verifying : mock-1.4.15-1.el7.noarch 30/49 Verifying : apr-util-1.5.2-6.el7.x86_64 31/49 Verifying : python-backports-1.0-8.el7.x86_64 32/49 Verifying : patch-2.7.1-10.el7_5.x86_64 33/49 Verifying : libmpc-1.0.1-3.el7.x86_64 34/49 Verifying : usermode-1.111-5.el7.x86_64 35/49 Verifying : python-six-1.9.0-2.el7.noarch 36/49 Verifying : libproxy-0.4.11-11.el7.x86_64 37/49 Verifying : neon-0.30.0-3.el7.x86_64 38/49 Verifying : mock-core-configs-30.2-1.el7.noarch 39/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 40/49 Verifying : bzip2-1.0.6-13.el7.x86_64 41/49 Verifying : subversion-1.7.14-14.el7.x86_64 42/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 43/49 Verifying : glibc-headers-2.17-260.el7_6.5.x86_64 44/49 Verifying : dwz-0.11-3.el7.x86_64 45/49 Verifying : unzip-6.0-19.el7.x86_64 46/49 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 47/49 Verifying : pakchois-0.4-10.el7.x86_64 48/49 Verifying : elfutils-0.172-2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.15-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.30-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5 glibc-headers.x86_64 0:2.17-260.el7_6.5 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.12.1.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-3.el7_6 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-3.el7 python2-pyroute2.noarch 0:0.4.13-2.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. 
Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1927 0 --:--:-- --:--:-- --:--:-- 1926 100 8513k 100 8513k 0 0 13.0M 0 --:--:-- --:--:-- --:--:-- 13.0M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2150 0 --:--:-- --:--:-- --:--:-- 2154 46 38.3M 46 17.9M 0 0 27.9M 0 0:00:01 --:--:-- 0:00:01 27.9M100 38.3M 100 38.3M 0 0 44.7M 0 --:--:-- --:--:-- --:--:-- 95.3M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 547 0 --:--:-- --:--:-- --:--:-- 548 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 620 0 0 1505 0 --:--:-- --:--:-- --:--:-- 605k 100 10.7M 100 10.7M 0 0 14.0M 0 --:--:-- --:--:-- --:--:-- 14.0M ~/nightlyrpmzfRAcR/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmzfRAcR/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmzfRAcR/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmzfRAcR ~ INFO: mock.py version 1.4.15 starting (python version = 2.7.5)... Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmzfRAcR/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.15 INFO: Mock Version: 1.4.15 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmzfRAcR/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 25 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M ffb164a032c2444180d8fc5ec8d0b9e5 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.X8mqD3:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins676371508730803287.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done d7d9ae83 +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 121 | n57.pufty | 172.19.3.121 | pufty | 3571 | Deployed | d7d9ae83 | None | None | 7 | x86_64 | 1 | 2560 | None | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Wed May 15 00:52:45 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 15 May 2019 00:52:45 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10098 - Still Failing! (master on CentOS-7/x86_64) In-Reply-To: <1049953829.174.1557821888570.JavaMail.jenkins@jenkins.ci.centos.org> References: <1049953829.174.1557821888570.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <839539819.280.1557881566080.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10098 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10098/ to view the results. From ci at centos.org Wed May 15 00:52:45 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 15 May 2019 00:52:45 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10099 - Still Failing! (master on CentOS-6/x86_64) Message-ID: <103348961.281.1557881566161.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10099 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10099/ to view the results. From ci at centos.org Wed May 15 00:52:52 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 15 May 2019 00:52:52 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10100 - Still Failing! (release-4.1 on CentOS-6/x86_64) In-Reply-To: <103348961.281.1557881566161.JavaMail.jenkins@jenkins.ci.centos.org> References: <103348961.281.1557881566161.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <778505357.283.1557881573141.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10100 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10100/ to view the results. From ci at centos.org Wed May 15 00:55:26 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 15 May 2019 00:55:26 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10102 - Failure! (release-5 on CentOS-6/x86_64) Message-ID: <1383795635.285.1557881726593.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10102 - Failure: Check console output at https://ci.centos.org/job/gluster_build-rpms/10102/ to view the results. 
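(The teardown shown in the passing post-build steps above is the whole node-release protocol: an earlier step in the job records one Duffy session ID per leased machine in $WORKSPACE/cico-ssid, and on completion each recorded session is returned to the pool with "cico node done", which the table output confirms. The same loop, sketched slightly more defensively; the cico invocation and the cico-ssid convention come from the job's own script, the guards are additions:)

# Release every node session recorded for this build.
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
if [ -f "$SSID_FILE" ]; then
    while read -r ssid; do
        # Skip blank lines; "cico -q node done" frees one session.
        [ -n "$ssid" ] && cico -q node done "$ssid"
    done < "$SSID_FILE"
fi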
From ci at centos.org Wed May 15 00:55:26 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 15 May 2019 00:55:26 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10103 - Still Failing! (release-5 on CentOS-7/x86_64) In-Reply-To: <1383795635.285.1557881726593.JavaMail.jenkins@jenkins.ci.centos.org> References: <1383795635.285.1557881726593.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1392152113.289.1557881726886.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10103 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10103/ to view the results. From ci at centos.org Wed May 15 00:55:26 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 15 May 2019 00:55:26 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10101 - Still Failing! (release-4.1 on CentOS-7/x86_64) In-Reply-To: <778505357.283.1557881573141.JavaMail.jenkins@jenkins.ci.centos.org> References: <778505357.283.1557881573141.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1771124474.288.1557881726883.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10101 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10101/ to view the results. From ci at centos.org Wed May 15 00:56:36 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 15 May 2019 00:56:36 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10105 - Still Failing! (release-6 on CentOS-6/x86_64) Message-ID: <1608692882.293.1557881796386.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10105 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10105/ to view the results. From ci at centos.org Wed May 15 00:56:36 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 15 May 2019 00:56:36 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10104 - Still Failing! (release-6 on CentOS-7/x86_64) In-Reply-To: <1392152113.289.1557881726886.JavaMail.jenkins@jenkins.ci.centos.org> References: <1392152113.289.1557881726886.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1622939296.292.1557881796360.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10104 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10104/ to view the results. From ci at centos.org Wed May 15 01:03:25 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 15 May 2019 01:03:25 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #193 In-Reply-To: <1633214447.124.1557795703085.JavaMail.jenkins@jenkins.ci.centos.org> References: <1633214447.124.1557795703085.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <235402242.294.1557882205213.JavaMail.jenkins@jenkins.ci.centos.org> See Changes: [amarts] nightly-builds: fix shallow cloning of git repository for non-master [amarts] nightly-builds: create a VERSION file to not need tags in the git repo [amarts] gluster-blockd: configure without tirpc on CentOS (#60) ------------------------------------------ [...truncated 55.53 KB...] 
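(The log that follows is the same sequence as gluster_ansible-infra #192 above: run-centos-ci.sh drives molecule against roles/firewall_config, and the run again aborts in the 'lint' action. To reproduce outside Jenkins, assuming a checkout of gluster-ansible-infra and molecule with the docker driver installed as on the CI node, roughly:)

# Run the failing scenario locally; "molecule test" walks the full matrix
# shown below (lint ... destroy), "molecule lint" runs only the failing step.
cd gluster-ansible-infra/roles/firewall_config
molecule lint -s default
molecule test -s default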
changed: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. 
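The [701]/[703] messages above are ansible-lint rejecting the untouched ansible-galaxy skeleton in roles/firewall_config/meta/main.yml: rule 701 wants a platforms list in galaxy_info, and rule 703 fires once for every field still at its generated placeholder (author, description, company, license). A minimal sketch of the kind of change that would clear both rules, with purely illustrative values (the role's real author, description, license, and supported platforms would go here):

# Sketch only: replace the skeleton galaxy_info with concrete metadata.
# Every value below is a placeholder, not the project's actual data.
cat > roles/firewall_config/meta/main.yml <<'EOF'
galaxy_info:
  author: Gluster maintainers            # was the default 'your name'
  description: Configure firewalld for GlusterFS hosts
  company: Example Org                   # optional; may be removed
  license: GPLv3
  min_ansible_version: 2.4
  platforms:                             # satisfies rule 701
    - name: EL
      versions:
        - 7
  galaxy_tags: []
dependencies: []
EOF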
--> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── create └── prepare --> Scenario: 'default' --> Action: 'create' PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
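The post-build task that follows is gated on finding the text "Build started" in the console log; the gate fails ("Could not match :Build started : False"), so the node-release script is skipped. For readability, here is that script reflowed from the flattened log text below; it returns each reserved CI node named in the SSID file:

# cico-node-done-from-ansible.sh (reflowed from the log; behavior unchanged)
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid    # release the node with this SSID
done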
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Wed May 15 01:07:14 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 15 May 2019 01:07:14 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #168 Message-ID: <49864873.295.1557882434560.JavaMail.jenkins@jenkins.ci.centos.org> See Changes: [amarts] nightly-builds: fix shallow cloning of git repository for non-master [amarts] nightly-builds: create a VERSION file to not need tags in the git repo [amarts] gluster-blockd: configure without tirpc on CentOS (#60) ------------------------------------------ [...truncated 459.54 KB...] changed: [kube1] => (item=gcs-etcd-operator.yml) changed: [kube1] => (item=gcs-etcd-cluster.yml) changed: [kube1] => (item=gcs-gd2-services.yml) changed: [kube1] => (item=gcs-fs-csi.yml) changed: [kube1] => (item=gcs-storage-snapshot.yml) changed: [kube1] => (item=gcs-virtblock-csi.yml) changed: [kube1] => (item=gcs-storage-virtblock.yml) changed: [kube1] => (item=gcs-prometheus-operator.yml) changed: [kube1] => (item=gcs-prometheus-bundle.yml) changed: [kube1] => (item=gcs-prometheus-alertmanager-cluster.yml) changed: [kube1] => (item=gcs-prometheus-operator-metrics.yml) changed: [kube1] => (item=gcs-prometheus-kube-state-metrics.yml) changed: [kube1] => (item=gcs-prometheus-node-exporter.yml) changed: [kube1] => (item=gcs-prometheus-kube-metrics.yml) changed: [kube1] => (item=gcs-prometheus-etcd.yml) changed: [kube1] => (item=gcs-grafana.yml) changed: [kube1] => (item=gcs-operator-crd.yml) changed: [kube1] => (item=gcs-operator.yml) changed: [kube1] => (item=gcs-mixins.yml) TASK [GCS Pre | Manifests | Create GD2 manifests] ****************************** Wednesday 15 May 2019 01:55:32 +0100 (0:00:34.996) 0:18:03.998 ********* included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 included: /root/gcs/deploy/tasks/create-gd2-manifests.yml for kube1 TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Set fact kube_hostname] *** Wednesday 15 May 2019 01:55:32 +0100 (0:00:00.247) 0:18:04.245 ********* ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube1 | Create gcs-gd2-kube1.yml] *** Wednesday 15 May 2019 01:55:32 +0100 (0:00:00.409) 0:18:04.655 ********* changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Set fact kube_hostname] *** Wednesday 15 May 2019 01:55:34 +0100 (0:00:02.081) 0:18:06.737 ********* ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube2 | Create gcs-gd2-kube2.yml] *** Wednesday 15 May 2019 01:55:35 +0100 (0:00:00.383) 0:18:07.120 ********* changed: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Set fact kube_hostname] *** Wednesday 15 May 2019 01:55:37 +0100 (0:00:02.135) 0:18:09.256 ********* ok: [kube1] TASK [GCS Pre | Manifests | Create GD2 manifests for kube3 | Create gcs-gd2-kube3.yml] *** Wednesday 15 May 2019 01:55:37 +0100 (0:00:00.384) 0:18:09.640 ********* changed: [kube1] TASK [GCS | Namespace | Create GCS namespace] ********************************** Wednesday 15 May 2019 01:55:39 +0100 (0:00:01.983) 0:18:11.623 ********* ok: [kube1] TASK [GCS | ETCD Operator | Deploy etcd-operator] 
****************************** Wednesday 15 May 2019 01:55:41 +0100 (0:00:01.616) 0:18:13.240 ********* ok: [kube1] TASK [GCS | ETCD Operator | Wait for etcd-operator to be available] ************ Wednesday 15 May 2019 01:55:43 +0100 (0:00:01.763) 0:18:15.004 ********* FAILED - RETRYING: GCS | ETCD Operator | Wait for etcd-operator to be available (50 retries left). changed: [kube1] TASK [GCS | Anthill | Register CRDs] ******************************************* Wednesday 15 May 2019 01:55:55 +0100 (0:00:12.069) 0:18:27.073 ********* ok: [kube1] TASK [Wait for GlusterCluster CRD to be registered] **************************** Wednesday 15 May 2019 01:55:56 +0100 (0:00:01.545) 0:18:28.619 ********* ok: [kube1] TASK [Wait for GlusterNode CRD to be registered] ******************************* Wednesday 15 May 2019 01:55:57 +0100 (0:00:01.195) 0:18:29.815 ********* ok: [kube1] TASK [GCS | Anthill | Deploy operator] ***************************************** Wednesday 15 May 2019 01:55:59 +0100 (0:00:01.258) 0:18:31.074 ********* ok: [kube1] TASK [GCS | ETCD Cluster | Deploy etcd-cluster] ******************************** Wednesday 15 May 2019 01:56:00 +0100 (0:00:01.614) 0:18:32.688 ********* ok: [kube1] TASK [GCS | ETCD Cluster | Get etcd-client service] **************************** Wednesday 15 May 2019 01:56:02 +0100 (0:00:01.914) 0:18:34.602 ********* changed: [kube1] TASK [GCS | ETCD Cluster | Set etcd_client_endpoint] *************************** Wednesday 15 May 2019 01:56:03 +0100 (0:00:01.181) 0:18:35.784 ********* ok: [kube1] TASK [GCS | ETCD Cluster | Wait for etcd-cluster to become ready] ************** Wednesday 15 May 2019 01:56:04 +0100 (0:00:00.350) 0:18:36.135 ********* FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (46 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (45 retries left). FAILED - RETRYING: GCS | ETCD Cluster | Wait for etcd-cluster to become ready (44 retries left). 
ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2 services] ********************************* Wednesday 15 May 2019 01:57:28 +0100 (0:01:24.095) 0:20:00.230 ********* ok: [kube1] TASK [GCS | GD2 Cluster | Deploy GD2] ****************************************** Wednesday 15 May 2019 01:57:29 +0100 (0:00:01.526) 0:20:01.757 ********* included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 included: /root/gcs/deploy/tasks/deploy-gd2.yml for kube1 TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Wednesday 15 May 2019 01:57:30 +0100 (0:00:00.204) 0:20:01.961 ********* ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube1] *************************** Wednesday 15 May 2019 01:57:30 +0100 (0:00:00.348) 0:20:02.310 ********* ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Wednesday 15 May 2019 01:57:31 +0100 (0:00:01.440) 0:20:03.751 ********* ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube2] *************************** Wednesday 15 May 2019 01:57:32 +0100 (0:00:00.400) 0:20:04.151 ********* ok: [kube1] TASK [GCS | GD2 Cluster | Set fact kube_hostname] ****************************** Wednesday 15 May 2019 01:57:33 +0100 (0:00:01.715) 0:20:05.867 ********* ok: [kube1] TASK [GCS | GD2 Cluster | Deploy glusterd2 on kube3] *************************** Wednesday 15 May 2019 01:57:34 +0100 (0:00:00.301) 0:20:06.169 ********* ok: [kube1] TASK [GCS | GD2 Cluster | Get glusterd2-client service] ************************ Wednesday 15 May 2019 01:57:35 +0100 (0:00:01.509) 0:20:07.678 ********* changed: [kube1] TASK [GCS | GD2 Cluster | Set gd2_client_endpoint] ***************************** Wednesday 15 May 2019 01:57:36 +0100 (0:00:01.238) 0:20:08.917 ********* ok: [kube1] TASK [GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready] ********** Wednesday 15 May 2019 01:57:37 +0100 (0:00:00.295) 0:20:09.213 ********* FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (50 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (49 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (48 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (47 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (46 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (45 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (44 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (43 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (42 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (41 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (40 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (39 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (38 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (37 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (36 retries left). 
FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (35 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (34 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (33 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (32 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (31 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (30 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (29 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (28 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (27 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (26 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (25 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (24 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (23 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (22 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (21 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (20 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (19 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (18 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (17 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (16 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (15 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (14 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (13 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (12 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (11 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (10 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (9 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (8 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (7 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (6 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (5 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (4 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (3 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (2 retries left). FAILED - RETRYING: GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready (1 retries left). fatal: [kube1]: FAILED! 
=> {"attempts": 50, "changed": false, "content": "", "msg": "Status code was -1 and not [200]: Request failed: ", "redirected": false, "status": -1, "url": "http://10.233.58.242:24007/v1/peers"} to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry PLAY RECAP ********************************************************************* kube1 : ok=421 changed=119 unreachable=0 failed=1 kube2 : ok=320 changed=93 unreachable=0 failed=0 kube3 : ok=283 changed=78 unreachable=0 failed=0 Wednesday 15 May 2019 02:07:14 +0100 (0:09:36.881) 0:29:46.094 ********* =============================================================================== GCS | GD2 Cluster | Wait for glusterd2-cluster to become ready -------- 576.88s GCS | ETCD Cluster | Wait for etcd-cluster to become ready ------------- 84.10s kubernetes/master : kubeadm | Initialize first master ------------------ 39.48s kubernetes/master : kubeadm | Init other uninitialized masters --------- 38.93s GCS Pre | Manifests | Sync GCS manifests ------------------------------- 35.00s download : container_download | download images for kubeadm config images -- 33.63s etcd : Gen_certs | Write etcd master certs ----------------------------- 32.94s Install packages ------------------------------------------------------- 32.29s kubernetes/master : kubeadm | write out kubeadm certs ------------------ 20.64s Wait for host to be available ------------------------------------------ 20.53s kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template -- 18.70s download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 15.88s gather facts from all instances ---------------------------------------- 14.04s etcd : wait for etcd up ------------------------------------------------ 13.43s kubernetes-apps/ansible : Kubernetes Apps | Start Resources ------------ 13.31s etcd : Gen_certs | Gather etcd master certs ---------------------------- 13.10s GCS | ETCD Operator | Wait for etcd-operator to be available ----------- 12.07s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Apply manifests -- 11.77s kubernetes-apps/external_provisioner/local_volume_provisioner : Local Volume Provisioner | Create manifests -- 11.31s container-engine/docker : Docker | pause while Docker restarts --------- 10.37s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Wed May 15 08:35:23 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 15 May 2019 08:35:23 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10118 - Still Failing! 
(release-4.1 on CentOS-7/x86_64) Message-ID: <1651957985.344.1557909324195.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10118 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10118/ to view the results. From ci at centos.org Wed May 15 08:35:23 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 15 May 2019 08:35:23 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10116 - Failure! (master on CentOS-6/x86_64) Message-ID: <1378986150.345.1557909324202.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10116 - Failure: Check console output at https://ci.centos.org/job/gluster_build-rpms/10116/ to view the results. From ci at centos.org Wed May 15 08:35:23 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 15 May 2019 08:35:23 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10117 - Failure! (release-4.1 on CentOS-6/x86_64) Message-ID: <1629035357.342.1557909324096.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10117 - Failure: Check console output at https://ci.centos.org/job/gluster_build-rpms/10117/ to view the results. From ci at centos.org Wed May 15 08:35:23 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 15 May 2019 08:35:23 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10115 - Failure! (master on CentOS-7/x86_64) Message-ID: <2109888777.343.1557909324158.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10115 - Failure: Check console output at https://ci.centos.org/job/gluster_build-rpms/10115/ to view the results. From ci at centos.org Wed May 15 08:36:20 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 15 May 2019 08:36:20 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10120 - Still Failing! (release-5 on CentOS-7/x86_64) Message-ID: <322363184.351.1557909380546.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10120 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10120/ to view the results. From ci at centos.org Wed May 15 08:36:20 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 15 May 2019 08:36:20 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10121 - Still Failing! (release-6 on CentOS-7/x86_64) Message-ID: <1458459530.352.1557909380576.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10121 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10121/ to view the results. From ci at centos.org Wed May 15 08:36:20 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 15 May 2019 08:36:20 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10122 - Still Failing! (release-6 on CentOS-6/x86_64) Message-ID: <1920525005.353.1557909380644.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10122 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10122/ to view the results. From ci at centos.org Wed May 15 08:36:20 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 15 May 2019 08:36:20 +0000 (UTC) Subject: [CI-results] gluster_build-rpms - Build # 10119 - Still Failing! 
(release-5 on CentOS-6/x86_64) In-Reply-To: <1651957985.344.1557909324195.JavaMail.jenkins@jenkins.ci.centos.org> References: <1651957985.344.1557909324195.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1650215095.350.1557909380546.JavaMail.jenkins@jenkins.ci.centos.org> gluster_build-rpms - Build # 10119 - Still Failing: Check console output at https://ci.centos.org/job/gluster_build-rpms/10119/ to view the results. From ci at centos.org Thu May 16 00:15:51 2019 From: ci at centos.org (ci at centos.org) Date: Thu, 16 May 2019 00:15:51 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #365 In-Reply-To: <1457029188.262.1557879357165.JavaMail.jenkins@jenkins.ci.centos.org> References: <1457029188.262.1557879357165.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1315386107.404.1557965751686.JavaMail.jenkins@jenkins.ci.centos.org> See Changes: [amarts] nightly-builds: VERSION should start with a 'v' (#62) [amarts] nightly-builds: use date of last commit as part of the package version [dkhandel] Archive the HTMl generated file by lcov [amarts] nightly-builds: older git versions do not support --date=format:... [github] gluster-block/lcov: pass enable-tirpc=no for centos (#65) ------------------------------------------ [...truncated 37.42 KB...] Total 73 MB/s | 143 MB 00:01 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7_6.2.x86_64 7/49 Installing : kernel-headers-3.10.0-957.12.2.el7.x86_64 8/49 Installing : glibc-headers-2.17-260.el7_6.5.x86_64 9/49 Installing : glibc-devel-2.17-260.el7_6.5.x86_64 10/49 Installing : gcc-4.8.5-36.el7_6.2.x86_64 11/49 Installing : elfutils-0.172-2.el7.x86_64 12/49 Installing : pakchois-0.4-10.el7.x86_64 13/49 Installing : unzip-6.0-19.el7.x86_64 14/49 Installing : dwz-0.11-3.el7.x86_64 15/49 Installing : bzip2-1.0.6-13.el7.x86_64 16/49 Installing : usermode-1.111-5.el7.x86_64 17/49 Installing : patch-2.7.1-10.el7_5.x86_64 18/49 Installing : python-backports-1.0-8.el7.x86_64 19/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 20/49 Installing : python-urllib3-1.10.2-5.el7.noarch 21/49 Installing : python-requests-2.6.0-1.el7_1.noarch 22/49 Installing : python-babel-0.9.6-8.el7.noarch 23/49 Installing : distribution-gpg-keys-1.30-1.el7.noarch 24/49 Installing : mock-core-configs-30.2-1.el7.noarch 25/49 Installing : libmodman-2.0.1-8.el7.x86_64 26/49 Installing : libproxy-0.4.11-11.el7.x86_64 27/49 Installing : python-markupsafe-0.11-10.el7.x86_64 28/49 Installing : python-jinja2-2.7.2-3.el7_6.noarch 29/49 Installing : python2-distro-1.2.0-3.el7.noarch 30/49 Installing : gdb-7.6.1-114.el7.x86_64 31/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/49 Installing : perl-srpm-macros-1-8.el7.noarch 33/49 Installing : pigz-2.3.4-1.el7.x86_64 34/49 Installing : python2-pyroute2-0.4.13-2.el7.noarch 35/49 Installing : golang-src-1.11.5-1.el7.noarch 36/49 Installing : 
nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-9.el7_6.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.15-1.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 2/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 3/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 4/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 5/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 6/49 Verifying : zip-3.0-11.el7.x86_64 7/49 Verifying : nettle-2.7.1-8.el7.x86_64 8/49 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 9/49 Verifying : golang-src-1.11.5-1.el7.noarch 10/49 Verifying : python2-pyroute2-0.4.13-2.el7.noarch 11/49 Verifying : pigz-2.3.4-1.el7.x86_64 12/49 Verifying : perl-srpm-macros-1-8.el7.noarch 13/49 Verifying : golang-1.11.5-1.el7.x86_64 14/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 15/49 Verifying : glibc-devel-2.17-260.el7_6.5.x86_64 16/49 Verifying : python-jinja2-2.7.2-3.el7_6.noarch 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : gdb-7.6.1-114.el7.x86_64 19/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 20/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 21/49 Verifying : gnutls-3.3.29-9.el7_6.x86_64 22/49 Verifying : python2-distro-1.2.0-3.el7.noarch 23/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 24/49 Verifying : libmodman-2.0.1-8.el7.x86_64 25/49 Verifying : mpfr-3.1.1-4.el7.x86_64 26/49 Verifying : distribution-gpg-keys-1.30-1.el7.noarch 27/49 Verifying : python-babel-0.9.6-8.el7.noarch 28/49 Verifying : mock-1.4.15-1.el7.noarch 29/49 Verifying : apr-util-1.5.2-6.el7.x86_64 30/49 Verifying : python-backports-1.0-8.el7.x86_64 31/49 Verifying : patch-2.7.1-10.el7_5.x86_64 32/49 Verifying : libmpc-1.0.1-3.el7.x86_64 33/49 Verifying : usermode-1.111-5.el7.x86_64 34/49 Verifying : python-six-1.9.0-2.el7.noarch 35/49 Verifying : libproxy-0.4.11-11.el7.x86_64 36/49 Verifying : neon-0.30.0-3.el7.x86_64 37/49 Verifying : mock-core-configs-30.2-1.el7.noarch 38/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 39/49 Verifying : bzip2-1.0.6-13.el7.x86_64 40/49 Verifying : subversion-1.7.14-14.el7.x86_64 41/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 42/49 Verifying : glibc-headers-2.17-260.el7_6.5.x86_64 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 46/49 Verifying : pakchois-0.4-10.el7.x86_64 47/49 Verifying : elfutils-0.172-2.el7.x86_64 48/49 Verifying : kernel-headers-3.10.0-957.12.2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.15-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.30-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5 glibc-headers.x86_64 0:2.17-260.el7_6.5 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 
0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.12.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-3.el7_6 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-3.el7 python2-pyroute2.noarch 0:0.4.13-2.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1956 0 --:--:-- --:--:-- --:--:-- 1964 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 8513k 100 8513k 0 0 11.0M 0 --:--:-- --:--:-- --:--:-- 56.1M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 1980 0 --:--:-- --:--:-- --:--:-- 1984 94 38.3M 94 36.2M 0 0 44.9M 0 --:--:-- --:--:-- --:--:-- 44.9M100 38.3M 100 38.3M 0 0 46.4M 0 --:--:-- --:--:-- --:--:-- 116M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 570 0 --:--:-- --:--:-- --:--:-- 573 0 0 0 620 0 0 1517 0 --:--:-- --:--:-- --:--:-- 1517 0 10.7M 0 51774 0 0 89548 0 0:02:05 --:--:-- 0:02:05 89548100 10.7M 100 10.7M 0 0 15.3M 0 --:--:-- --:--:-- --:--:-- 89.0M ~/nightlyrpm1xikAX/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpm1xikAX/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpm1xikAX/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpm1xikAX ~ INFO: mock.py version 1.4.15 starting (python version = 2.7.5)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpm1xikAX/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.15 INFO: Mock Version: 1.4.15 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpm1xikAX/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 24 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 44f0791ef6284b089eb279a047b1d6a9 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.QllvgD:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins2229259112643911641.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done d6bb32f8 +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 138 | n11.crusty | 172.19.2.11 | crusty | 3575 | Deployed | d6bb32f8 | None | None | 7 | x86_64 | 1 | 2100 | None | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Thu May 16 01:04:31 2019 From: ci at centos.org (ci at centos.org) Date: Thu, 16 May 2019 01:04:31 +0000 (UTC) Subject: [CI-results] Jenkins build is back to normal : gluster_anteater_gcs #169 In-Reply-To: <49864873.295.1557882434560.JavaMail.jenkins@jenkins.ci.centos.org> References: <49864873.295.1557882434560.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1135599288.406.1557968671129.JavaMail.jenkins@jenkins.ci.centos.org> See From ci at centos.org Thu May 16 01:22:27 2019 From: ci at centos.org (ci at centos.org) Date: Thu, 16 May 2019 01:22:27 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #194 In-Reply-To: <235402242.294.1557882205213.JavaMail.jenkins@jenkins.ci.centos.org> References: <235402242.294.1557882205213.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1818543528.407.1557969747199.JavaMail.jenkins@jenkins.ci.centos.org> See Changes: [amarts] nightly-builds: VERSION should start with a 'v' (#62) [amarts] nightly-builds: use date of last commit as part of the package version [dkhandel] Archive the HTMl generated file by lcov [amarts] nightly-builds: older git versions do not support --date=format:... [github] gluster-block/lcov: pass enable-tirpc=no for centos (#65) ------------------------------------------ [...truncated 55.53 KB...] 
changed: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. 
--> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── create └── prepare --> Scenario: 'default' --> Action: 'create' PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 Build step 'Execute shell' marked build as failure Performing Post build task... 
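Note that in this run, as in gluster_ansible-infra #193 above, the lint errors are only the proximate failure: run-centos-ci.sh also fails at line 29 to cd into gluster-ansible-infra/roles/backend_setup/, after which the second molecule pass apparently re-runs firewall_config from the old working directory instead of testing backend_setup. A hedged hardening sketch for the test script (variable name and message are illustrative, not the script's actual code):

# Fail fast when a role directory is missing, rather than letting the
# remaining molecule steps run from the previous working directory.
role_dir="gluster-ansible-infra/roles/backend_setup/"
if ! cd "$role_dir"; then
    echo "ERROR: role directory '$role_dir' not found; aborting" >&2
    exit 1
fi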
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Fri May 17 00:16:04 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 17 May 2019 00:16:04 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #366 In-Reply-To: <1315386107.404.1557965751686.JavaMail.jenkins@jenkins.ci.centos.org> References: <1315386107.404.1557965751686.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1191547796.479.1558052164911.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.42 KB...] Total 58 MB/s | 143 MB 00:02 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7_6.2.x86_64 7/49 Installing : kernel-headers-3.10.0-957.12.2.el7.x86_64 8/49 Installing : glibc-headers-2.17-260.el7_6.5.x86_64 9/49 Installing : glibc-devel-2.17-260.el7_6.5.x86_64 10/49 Installing : gcc-4.8.5-36.el7_6.2.x86_64 11/49 Installing : elfutils-0.172-2.el7.x86_64 12/49 Installing : pakchois-0.4-10.el7.x86_64 13/49 Installing : unzip-6.0-19.el7.x86_64 14/49 Installing : dwz-0.11-3.el7.x86_64 15/49 Installing : bzip2-1.0.6-13.el7.x86_64 16/49 Installing : usermode-1.111-5.el7.x86_64 17/49 Installing : patch-2.7.1-10.el7_5.x86_64 18/49 Installing : python-backports-1.0-8.el7.x86_64 19/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 20/49 Installing : python-urllib3-1.10.2-5.el7.noarch 21/49 Installing : python-requests-2.6.0-1.el7_1.noarch 22/49 Installing : python-babel-0.9.6-8.el7.noarch 23/49 Installing : distribution-gpg-keys-1.30-1.el7.noarch 24/49 Installing : mock-core-configs-30.2-1.el7.noarch 25/49 Installing : libmodman-2.0.1-8.el7.x86_64 26/49 Installing : libproxy-0.4.11-11.el7.x86_64 27/49 Installing : python-markupsafe-0.11-10.el7.x86_64 28/49 Installing : python-jinja2-2.7.2-3.el7_6.noarch 29/49 Installing : python2-distro-1.2.0-3.el7.noarch 30/49 Installing : gdb-7.6.1-114.el7.x86_64 31/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/49 Installing : perl-srpm-macros-1-8.el7.noarch 33/49 Installing : pigz-2.3.4-1.el7.x86_64 34/49 Installing : python2-pyroute2-0.4.13-2.el7.noarch 35/49 Installing : golang-src-1.11.5-1.el7.noarch 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-9.el7_6.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : 
golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.15-1.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 2/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 3/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 4/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 5/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 6/49 Verifying : zip-3.0-11.el7.x86_64 7/49 Verifying : nettle-2.7.1-8.el7.x86_64 8/49 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 9/49 Verifying : golang-src-1.11.5-1.el7.noarch 10/49 Verifying : python2-pyroute2-0.4.13-2.el7.noarch 11/49 Verifying : pigz-2.3.4-1.el7.x86_64 12/49 Verifying : perl-srpm-macros-1-8.el7.noarch 13/49 Verifying : golang-1.11.5-1.el7.x86_64 14/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 15/49 Verifying : glibc-devel-2.17-260.el7_6.5.x86_64 16/49 Verifying : python-jinja2-2.7.2-3.el7_6.noarch 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : gdb-7.6.1-114.el7.x86_64 19/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 20/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 21/49 Verifying : gnutls-3.3.29-9.el7_6.x86_64 22/49 Verifying : python2-distro-1.2.0-3.el7.noarch 23/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 24/49 Verifying : libmodman-2.0.1-8.el7.x86_64 25/49 Verifying : mpfr-3.1.1-4.el7.x86_64 26/49 Verifying : distribution-gpg-keys-1.30-1.el7.noarch 27/49 Verifying : python-babel-0.9.6-8.el7.noarch 28/49 Verifying : mock-1.4.15-1.el7.noarch 29/49 Verifying : apr-util-1.5.2-6.el7.x86_64 30/49 Verifying : python-backports-1.0-8.el7.x86_64 31/49 Verifying : patch-2.7.1-10.el7_5.x86_64 32/49 Verifying : libmpc-1.0.1-3.el7.x86_64 33/49 Verifying : usermode-1.111-5.el7.x86_64 34/49 Verifying : python-six-1.9.0-2.el7.noarch 35/49 Verifying : libproxy-0.4.11-11.el7.x86_64 36/49 Verifying : neon-0.30.0-3.el7.x86_64 37/49 Verifying : mock-core-configs-30.2-1.el7.noarch 38/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 39/49 Verifying : bzip2-1.0.6-13.el7.x86_64 40/49 Verifying : subversion-1.7.14-14.el7.x86_64 41/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 42/49 Verifying : glibc-headers-2.17-260.el7_6.5.x86_64 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 46/49 Verifying : pakchois-0.4-10.el7.x86_64 47/49 Verifying : elfutils-0.172-2.el7.x86_64 48/49 Verifying : kernel-headers-3.10.0-957.12.2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.15-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.30-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5 glibc-headers.x86_64 0:2.17-260.el7_6.5 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 0:3.10.0-957.12.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 
perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-3.el7_6 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-3.el7 python2-pyroute2.noarch 0:0.4.13-2.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1691 0 --:--:-- --:--:-- --:--:-- 1699 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 8513k 100 8513k 0 0 12.3M 0 --:--:-- --:--:-- --:--:-- 26.6M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2159 0 --:--:-- --:--:-- --:--:-- 2162 39 38.3M 39 15.1M 0 0 22.6M 0 0:00:01 --:--:-- 0:00:01 22.6M100 38.3M 100 38.3M 0 0 40.1M 0 --:--:-- --:--:-- --:--:-- 81.5M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 560 0 --:--:-- --:--:-- --:--:-- 562 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 620 0 0 1596 0 --:--:-- --:--:-- --:--:-- 605k 100 10.7M 100 10.7M 0 0 15.5M 0 --:--:-- --:--:-- --:--:-- 15.5M ~/nightlyrpmIhQeaf/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmIhQeaf/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmIhQeaf/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmIhQeaf ~ INFO: mock.py version 1.4.15 starting (python version = 2.7.5)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmIhQeaf/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.15 INFO: Mock Version: 1.4.15 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmIhQeaf/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 26 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M fe8881ca86c1448cb15a22f324a5d02c -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.DFoa4W:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
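The mock failure above surfaces only as the systemd-nspawn wrapper command exiting non-zero; the actual rpmbuild error is written to the result directory mock reports (/srv/glusterd2/nightly/master/7/x86_64 in this run), and with cleanup_on_failure=True only those logs survive. A sketch of reproducing the failure outside Jenkins, assuming the same SRPM and chroot config; the --resultdir path is arbitrary:

# rebuild the nightly SRPM in the same chroot config CI used
mock -r epel-7-x86_64 --rebuild \
    /root/nightlyrpmIhQeaf/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm \
    --resultdir /tmp/gd2-rebuild

# build.log carries the rpmbuild/compile error; root.log covers chroot setup
less /tmp/gd2-rebuild/build.log
less /tmp/gd2-rebuild/root.log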
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins6640238571364412914.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done 1932b8c7 +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 107 | n43.pufty | 172.19.3.107 | pufty | 3578 | Deployed | 1932b8c7 | None | None | 7 | x86_64 | 1 | 2420 | None | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Fri May 17 00:41:57 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 17 May 2019 00:41:57 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #170 Message-ID: <642814946.480.1558053717706.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 287.08 KB...] TASK [container-engine/docker : check number of search domains] **************** Friday 17 May 2019 01:41:15 +0100 (0:00:00.291) 0:02:59.659 ************ TASK [container-engine/docker : check length of search domains] **************** Friday 17 May 2019 01:41:15 +0100 (0:00:00.292) 0:02:59.952 ************ TASK [container-engine/docker : check for minimum kernel version] ************** Friday 17 May 2019 01:41:16 +0100 (0:00:00.292) 0:03:00.244 ************ TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Friday 17 May 2019 01:41:16 +0100 (0:00:00.289) 0:03:00.534 ************ TASK [container-engine/docker : Ensure old versions of Docker are not installed. 
| RedHat] *** Friday 17 May 2019 01:41:17 +0100 (0:00:00.592) 0:03:01.126 ************ TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Friday 17 May 2019 01:41:18 +0100 (0:00:01.345) 0:03:02.471 ************ TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Friday 17 May 2019 01:41:18 +0100 (0:00:00.254) 0:03:02.726 ************ TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Friday 17 May 2019 01:41:18 +0100 (0:00:00.255) 0:03:02.982 ************ TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Friday 17 May 2019 01:41:19 +0100 (0:00:00.318) 0:03:03.300 ************ TASK [container-engine/docker : Configure docker repository on Fedora] ********* Friday 17 May 2019 01:41:19 +0100 (0:00:00.302) 0:03:03.603 ************ TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Friday 17 May 2019 01:41:19 +0100 (0:00:00.273) 0:03:03.876 ************ TASK [container-engine/docker : Copy yum.conf for editing] ********************* Friday 17 May 2019 01:41:20 +0100 (0:00:00.275) 0:03:04.152 ************ TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Friday 17 May 2019 01:41:20 +0100 (0:00:00.282) 0:03:04.435 ************ TASK [container-engine/docker : ensure docker packages are installed] ********** Friday 17 May 2019 01:41:20 +0100 (0:00:00.280) 0:03:04.716 ************ TASK [container-engine/docker : Ensure docker packages are installed] ********** Friday 17 May 2019 01:41:20 +0100 (0:00:00.368) 0:03:05.084 ************ TASK [container-engine/docker : get available packages on Ubuntu] ************** Friday 17 May 2019 01:41:21 +0100 (0:00:00.331) 0:03:05.416 ************ TASK [container-engine/docker : show available packages on ubuntu] ************* Friday 17 May 2019 01:41:21 +0100 (0:00:00.297) 0:03:05.713 ************ TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Friday 17 May 2019 01:41:21 +0100 (0:00:00.285) 0:03:05.999 ************ TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Friday 17 May 2019 01:41:22 +0100 (0:00:00.279) 0:03:06.279 ************ ok: [kube1] ok: [kube3] ok: [kube2] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Friday 17 May 2019 01:41:24 +0100 (0:00:01.921) 0:03:08.201 ************ ok: [kube1] ok: [kube2] ok: [kube3] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Friday 17 May 2019 01:41:25 +0100 (0:00:01.099) 0:03:09.300 ************ TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Friday 17 May 2019 01:41:25 +0100 (0:00:00.291) 0:03:09.592 ************ changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Friday 17 May 2019 01:41:26 +0100 (0:00:01.082) 0:03:10.675 ************ TASK [container-engine/docker : get systemd version] *************************** Friday 17 May 2019 01:41:26 +0100 (0:00:00.308) 0:03:10.983 ************ TASK [container-engine/docker : Write docker.service systemd file] ************* Friday 17 May 2019 01:41:27 +0100 (0:00:00.299) 0:03:11.282 ************ TASK [container-engine/docker : Write docker options systemd drop-in] ********** Friday 17 May 2019 01:41:27 +0100 (0:00:00.304) 0:03:11.587 ************ changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Friday 17 May 2019 01:41:29 +0100 (0:00:02.058) 0:03:13.646 ************ changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Friday 17 May 2019 01:41:31 +0100 (0:00:01.982) 0:03:15.628 ************ TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Friday 17 May 2019 01:41:31 +0100 (0:00:00.326) 0:03:15.955 ************ RUNNING HANDLER [container-engine/docker : restart docker] ********************* Friday 17 May 2019 01:41:32 +0100 (0:00:00.227) 0:03:16.183 ************ changed: [kube1] changed: [kube3] changed: [kube2] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Friday 17 May 2019 01:41:33 +0100 (0:00:00.967) 0:03:17.150 ************ changed: [kube1] changed: [kube3] changed: [kube2] RUNNING HANDLER [container-engine/docker : Docker | reload docker.socket] ****** Friday 17 May 2019 01:41:34 +0100 (0:00:01.228) 0:03:18.379 ************ RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Friday 17 May 2019 01:41:34 +0100 (0:00:00.332) 0:03:18.711 ************ changed: [kube1] changed: [kube3] changed: [kube2] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Friday 17 May 2019 01:41:38 +0100 (0:00:04.339) 0:03:23.051 ************ Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube1] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Friday 17 May 2019 01:41:49 +0100 (0:00:10.264) 0:03:33.316 ************ changed: [kube1] changed: [kube3] changed: [kube2] TASK [container-engine/docker : ensure docker service is started and enabled] *** Friday 17 May 2019 01:41:50 +0100 (0:00:01.233) 0:03:34.549 ************ ok: [kube1] => (item=docker) ok: [kube2] => (item=docker) ok: [kube3] => (item=docker) TASK [download : include_tasks] ************************************************ Friday 17 May 2019 01:41:51 +0100 (0:00:01.242) 0:03:35.791 ************ included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Friday 17 May 2019 01:41:52 +0100 (0:00:00.509) 0:03:36.301 ************ ok: [kube3] ok: [kube1] ok: 
[kube2] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Friday 17 May 2019 01:41:53 +0100 (0:00:01.242) 0:03:37.543 ************ changed: [kube1] changed: [kube2] changed: [kube3] TASK [download : container_download | create local directory for saved/loaded container images] *** Friday 17 May 2019 01:41:54 +0100 (0:00:00.930) 0:03:38.474 ************ TASK [download : Download items] *********************************************** Friday 17 May 2019 01:41:54 +0100 (0:00:00.103) 0:03:38.578 ************ fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} PLAY RECAP ********************************************************************* kube1 : ok=109 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0 kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0 kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0 Friday 17 May 2019 01:41:57 +0100 (0:00:02.756) 0:03:41.335 ************ =============================================================================== Install packages ------------------------------------------------------- 32.75s Wait for host to be available ------------------------------------------ 24.07s gather facts from all instances ---------------------------------------- 17.64s container-engine/docker : Docker | pause while Docker restarts --------- 10.26s Persist loaded modules -------------------------------------------------- 6.05s container-engine/docker : Docker | reload docker ------------------------ 4.34s kubernetes/preinstall : Create kubernetes directories ------------------- 3.92s download : Download items ----------------------------------------------- 2.76s bootstrap-os : Gather nodes hostnames ----------------------------------- 2.61s Load required kernel modules -------------------------------------------- 2.51s bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.49s kubernetes/preinstall : Create cni directories -------------------------- 2.47s Extend root VG ---------------------------------------------------------- 2.41s kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.07s kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.07s container-engine/docker : Write docker options systemd drop-in ---------- 2.06s download : Download items ----------------------------------------------- 2.02s Gathering Facts --------------------------------------------------------- 2.02s container-engine/docker : Write docker dns systemd drop-in -------------- 1.98s download : Sync container ----------------------------------------------- 1.97s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Fri May 17 01:17:51 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 17 May 2019 01:17:51 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #195 In-Reply-To: <1818543528.407.1557969747199.JavaMail.jenkins@jenkins.ci.centos.org> References: <1818543528.407.1557969747199.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <960731127.481.1558055871793.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 55.22 KB...] changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── create └── prepare --> Scenario: 'default' --> Action: 'create' [DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg. PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully.
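Separately from the lint failure, the run-centos-ci.sh error above ("line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory") looks like a relative cd issued from a directory the script had already changed into. A defensive sketch under that assumption; REPO_ROOT and the directory layout are guesses, since the script itself is not shown in the log:

# resolve the checkout once, then cd with absolute paths
REPO_ROOT="${WORKSPACE:-$PWD}/gluster-ansible-infra"    # hypothetical layout
cd "${REPO_ROOT}/roles/backend_setup" || {
    echo "backend_setup role not found under ${REPO_ROOT}" >&2
    exit 1
}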
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... [701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 Build step 'Execute shell' marked build as failure Performing Post build task... Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Sat May 18 00:13:50 2019 From: ci at centos.org (ci at centos.org) Date: Sat, 18 May 2019 00:13:50 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #367 In-Reply-To: <1191547796.479.1558052164911.JavaMail.jenkins@jenkins.ci.centos.org> References: <1191547796.479.1558052164911.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <333938197.559.1558138430545.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.39 KB...] Total 96 MB/s | 143 MB 00:01 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7_6.2.x86_64 7/49 Installing : kernel-headers-3.10.0-957.12.2.el7.x86_64 8/49 Installing : glibc-headers-2.17-260.el7_6.5.x86_64 9/49 Installing : glibc-devel-2.17-260.el7_6.5.x86_64 10/49 Installing : gcc-4.8.5-36.el7_6.2.x86_64 11/49 Installing : elfutils-0.172-2.el7.x86_64 12/49 Installing : pakchois-0.4-10.el7.x86_64 13/49 Installing : unzip-6.0-19.el7.x86_64 14/49 Installing : dwz-0.11-3.el7.x86_64 15/49 Installing : bzip2-1.0.6-13.el7.x86_64 16/49 Installing : usermode-1.111-5.el7.x86_64 17/49 Installing : patch-2.7.1-10.el7_5.x86_64 18/49 Installing : python-backports-1.0-8.el7.x86_64 19/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 20/49 Installing : python-urllib3-1.10.2-5.el7.noarch 21/49 Installing : python-requests-2.6.0-1.el7_1.noarch 22/49 Installing : python-babel-0.9.6-8.el7.noarch 23/49 Installing : distribution-gpg-keys-1.30-1.el7.noarch 24/49 Installing : mock-core-configs-30.2-1.el7.noarch 25/49 Installing : libmodman-2.0.1-8.el7.x86_64 26/49 Installing : libproxy-0.4.11-11.el7.x86_64 27/49 Installing : python-markupsafe-0.11-10.el7.x86_64 28/49 Installing : python-jinja2-2.7.2-3.el7_6.noarch 29/49 Installing : python2-distro-1.2.0-3.el7.noarch 30/49 Installing : gdb-7.6.1-114.el7.x86_64 31/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/49 Installing : perl-srpm-macros-1-8.el7.noarch 33/49 Installing : pigz-2.3.4-1.el7.x86_64 34/49 Installing : python2-pyroute2-0.4.13-2.el7.noarch 35/49 Installing : golang-src-1.11.5-1.el7.noarch 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : 
redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-9.el7_6.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.15-1.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 2/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 3/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 4/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 5/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 6/49 Verifying : zip-3.0-11.el7.x86_64 7/49 Verifying : nettle-2.7.1-8.el7.x86_64 8/49 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 9/49 Verifying : golang-src-1.11.5-1.el7.noarch 10/49 Verifying : python2-pyroute2-0.4.13-2.el7.noarch 11/49 Verifying : pigz-2.3.4-1.el7.x86_64 12/49 Verifying : perl-srpm-macros-1-8.el7.noarch 13/49 Verifying : golang-1.11.5-1.el7.x86_64 14/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 15/49 Verifying : glibc-devel-2.17-260.el7_6.5.x86_64 16/49 Verifying : python-jinja2-2.7.2-3.el7_6.noarch 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : gdb-7.6.1-114.el7.x86_64 19/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 20/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 21/49 Verifying : gnutls-3.3.29-9.el7_6.x86_64 22/49 Verifying : python2-distro-1.2.0-3.el7.noarch 23/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 24/49 Verifying : libmodman-2.0.1-8.el7.x86_64 25/49 Verifying : mpfr-3.1.1-4.el7.x86_64 26/49 Verifying : distribution-gpg-keys-1.30-1.el7.noarch 27/49 Verifying : python-babel-0.9.6-8.el7.noarch 28/49 Verifying : mock-1.4.15-1.el7.noarch 29/49 Verifying : apr-util-1.5.2-6.el7.x86_64 30/49 Verifying : python-backports-1.0-8.el7.x86_64 31/49 Verifying : patch-2.7.1-10.el7_5.x86_64 32/49 Verifying : libmpc-1.0.1-3.el7.x86_64 33/49 Verifying : usermode-1.111-5.el7.x86_64 34/49 Verifying : python-six-1.9.0-2.el7.noarch 35/49 Verifying : libproxy-0.4.11-11.el7.x86_64 36/49 Verifying : neon-0.30.0-3.el7.x86_64 37/49 Verifying : mock-core-configs-30.2-1.el7.noarch 38/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 39/49 Verifying : bzip2-1.0.6-13.el7.x86_64 40/49 Verifying : subversion-1.7.14-14.el7.x86_64 41/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 42/49 Verifying : glibc-headers-2.17-260.el7_6.5.x86_64 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 46/49 Verifying : pakchois-0.4-10.el7.x86_64 47/49 Verifying : elfutils-0.172-2.el7.x86_64 48/49 Verifying : kernel-headers-3.10.0-957.12.2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.15-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.30-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5 glibc-headers.x86_64 0:2.17-260.el7_6.5 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 
0:3.10.0-957.12.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-3.el7_6 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-3.el7 python2-pyroute2.noarch 0:0.4.13-2.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 2286 0 --:--:-- --:--:-- --:--:-- 2300 62 8513k 62 5312k 0 0 9.9M 0 --:--:-- --:--:-- --:--:-- 9.9M100 8513k 100 8513k 0 0 15.0M 0 --:--:-- --:--:-- --:--:-- 115M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2901 0 --:--:-- --:--:-- --:--:-- 2916 100 38.3M 100 38.3M 0 0 50.0M 0 --:--:-- --:--:-- --:--:-- 50.0M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 815 0 --:--:-- --:--:-- --:--:-- 818 0 0 0 620 0 0 2141 0 --:--:-- --:--:-- --:--:-- 2141 0 10.7M 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 10.7M 100 10.7M 0 0 19.1M 0 --:--:-- --:--:-- --:--:-- 80.7M ~/nightlyrpmCH51ok/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmCH51ok/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmCH51ok/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmCH51ok ~ INFO: mock.py version 1.4.15 starting (python version = 2.7.5)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmCH51ok/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.15 INFO: Mock Version: 1.4.15 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmCH51ok/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 1 minutes 32 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M ba800d98697643bfb84940b5b04ec1d4 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.4mkY12:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
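The post-build script that runs next appears flattened onto one line by the archive; reflowed for readability (same content as logged, with the trailing comment truncated exactly as the log shows it):

# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done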
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins7826977765101876204.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done ac4eaa7b +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 214 | n23.dusty | 172.19.2.87 | dusty | 3581 | Deployed | ac4eaa7b | None | None | 7 | x86_64 | 1 | 2220 | None | +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Sat May 18 00:41:40 2019 From: ci at centos.org (ci at centos.org) Date: Sat, 18 May 2019 00:41:40 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #171 In-Reply-To: <642814946.480.1558053717706.JavaMail.jenkins@jenkins.ci.centos.org> References: <642814946.480.1558053717706.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <171252477.562.1558140100388.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 287.16 KB...] TASK [container-engine/docker : check number of search domains] **************** Saturday 18 May 2019 01:40:58 +0100 (0:00:00.286) 0:02:57.204 ********** TASK [container-engine/docker : check length of search domains] **************** Saturday 18 May 2019 01:40:59 +0100 (0:00:00.373) 0:02:57.578 ********** TASK [container-engine/docker : check for minimum kernel version] ************** Saturday 18 May 2019 01:40:59 +0100 (0:00:00.296) 0:02:57.875 ********** TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Saturday 18 May 2019 01:40:59 +0100 (0:00:00.285) 0:02:58.160 ********** TASK [container-engine/docker : Ensure old versions of Docker are not installed. 
| RedHat] *** Saturday 18 May 2019 01:41:00 +0100 (0:00:00.592) 0:02:58.752 ********** TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Saturday 18 May 2019 01:41:01 +0100 (0:00:01.340) 0:03:00.093 ********** TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Saturday 18 May 2019 01:41:01 +0100 (0:00:00.249) 0:03:00.343 ********** TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Saturday 18 May 2019 01:41:02 +0100 (0:00:00.252) 0:03:00.595 ********** TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Saturday 18 May 2019 01:41:02 +0100 (0:00:00.294) 0:03:00.889 ********** TASK [container-engine/docker : Configure docker repository on Fedora] ********* Saturday 18 May 2019 01:41:02 +0100 (0:00:00.298) 0:03:01.187 ********** TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Saturday 18 May 2019 01:41:02 +0100 (0:00:00.308) 0:03:01.496 ********** TASK [container-engine/docker : Copy yum.conf for editing] ********************* Saturday 18 May 2019 01:41:03 +0100 (0:00:00.276) 0:03:01.772 ********** TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Saturday 18 May 2019 01:41:03 +0100 (0:00:00.276) 0:03:02.049 ********** TASK [container-engine/docker : ensure docker packages are installed] ********** Saturday 18 May 2019 01:41:03 +0100 (0:00:00.280) 0:03:02.330 ********** TASK [container-engine/docker : Ensure docker packages are installed] ********** Saturday 18 May 2019 01:41:04 +0100 (0:00:00.364) 0:03:02.695 ********** TASK [container-engine/docker : get available packages on Ubuntu] ************** Saturday 18 May 2019 01:41:04 +0100 (0:00:00.345) 0:03:03.040 ********** TASK [container-engine/docker : show available packages on ubuntu] ************* Saturday 18 May 2019 01:41:04 +0100 (0:00:00.274) 0:03:03.315 ********** TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Saturday 18 May 2019 01:41:05 +0100 (0:00:00.285) 0:03:03.601 ********** TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Saturday 18 May 2019 01:41:05 +0100 (0:00:00.274) 0:03:03.875 ********** ok: [kube1] ok: [kube3] ok: [kube2] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Saturday 18 May 2019 01:41:07 +0100 (0:00:01.910) 0:03:05.786 ********** ok: [kube2] ok: [kube1] ok: [kube3] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Saturday 18 May 2019 01:41:08 +0100 (0:00:01.088) 0:03:06.874 ********** TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Saturday 18 May 2019 01:41:08 +0100 (0:00:00.299) 0:03:07.173 ********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Saturday 18 May 2019 01:41:09 +0100 (0:00:00.965) 0:03:08.138 ********** TASK [container-engine/docker : get systemd version] *************************** Saturday 18 May 2019 01:41:09 +0100 (0:00:00.305) 0:03:08.444 ********** TASK [container-engine/docker : Write docker.service systemd file] ************* Saturday 18 May 2019 01:41:10 +0100 (0:00:00.298) 0:03:08.743 ********** TASK [container-engine/docker : Write docker options systemd drop-in] ********** Saturday 18 May 2019 01:41:10 +0100 (0:00:00.299) 0:03:09.042 ********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Saturday 18 May 2019 01:41:12 +0100 (0:00:01.961) 0:03:11.004 ********** changed: [kube2] changed: [kube1] changed: [kube3] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Saturday 18 May 2019 01:41:14 +0100 (0:00:02.052) 0:03:13.057 ********** TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Saturday 18 May 2019 01:41:14 +0100 (0:00:00.306) 0:03:13.363 ********** RUNNING HANDLER [container-engine/docker : restart docker] ********************* Saturday 18 May 2019 01:41:15 +0100 (0:00:00.271) 0:03:13.635 ********** changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Saturday 18 May 2019 01:41:16 +0100 (0:00:01.057) 0:03:14.693 ********** changed: [kube1] changed: [kube3] changed: [kube2] RUNNING HANDLER [container-engine/docker : Docker | reload docker.socket] ****** Saturday 18 May 2019 01:41:17 +0100 (0:00:01.176) 0:03:15.870 ********** RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Saturday 18 May 2019 01:41:17 +0100 (0:00:00.282) 0:03:16.152 ********** changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Saturday 18 May 2019 01:41:21 +0100 (0:00:04.090) 0:03:20.242 ********** Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube1] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Saturday 18 May 2019 01:41:31 +0100 (0:00:10.199) 0:03:30.442 ********** changed: [kube3] changed: [kube2] changed: [kube1] TASK [container-engine/docker : ensure docker service is started and enabled] *** Saturday 18 May 2019 01:41:33 +0100 (0:00:01.167) 0:03:31.610 ********** ok: [kube1] => (item=docker) ok: [kube2] => (item=docker) ok: [kube3] => (item=docker) TASK [download : include_tasks] ************************************************ Saturday 18 May 2019 01:41:34 +0100 (0:00:01.218) 0:03:32.829 ********** included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Saturday 18 May 2019 01:41:34 +0100 (0:00:00.509) 0:03:33.339 ********** ok: [kube1] ok: [kube3] ok: 
[kube2] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Saturday 18 May 2019 01:41:36 +0100 (0:00:01.198) 0:03:34.537 ********** changed: [kube1] changed: [kube3] changed: [kube2] TASK [download : container_download | create local directory for saved/loaded container images] *** Saturday 18 May 2019 01:41:36 +0100 (0:00:00.939) 0:03:35.477 ********** TASK [download : Download items] *********************************************** Saturday 18 May 2019 01:41:37 +0100 (0:00:00.138) 0:03:35.615 ********** fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} PLAY RECAP ********************************************************************* kube1 : ok=109 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0 kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0 kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0 Saturday 18 May 2019 01:41:39 +0100 (0:00:02.894) 0:03:38.510 ********** =============================================================================== Install packages ------------------------------------------------------- 32.61s Wait for host to be available ------------------------------------------ 21.60s gather facts from all instances ---------------------------------------- 17.15s container-engine/docker : Docker | pause while Docker restarts --------- 10.20s Persist loaded modules -------------------------------------------------- 6.17s container-engine/docker : Docker | reload docker ------------------------ 4.09s kubernetes/preinstall : Create kubernetes directories ------------------- 4.02s download : Download items ----------------------------------------------- 2.89s bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.72s bootstrap-os : Gather nodes hostnames ----------------------------------- 2.64s Load required kernel modules -------------------------------------------- 2.62s kubernetes/preinstall : Create cni directories -------------------------- 2.54s Extend root VG ---------------------------------------------------------- 2.46s kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.17s kubernetes/preinstall : Set selinux policy ------------------------------ 2.11s Gathering Facts --------------------------------------------------------- 2.08s container-engine/docker : Write docker dns systemd drop-in -------------- 2.05s bootstrap-os : Create remote_tmp for it is used by another module ------- 2.04s bootstrap-os : Disable fastestmirror plugin ----------------------------- 1.97s container-engine/docker : Write docker options systemd drop-in ---------- 1.96s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Sat May 18 01:22:58 2019 From: ci at centos.org (ci at centos.org) Date: Sat, 18 May 2019 01:22:58 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #196 In-Reply-To: <960731127.481.1558055871793.JavaMail.jenkins@jenkins.ci.centos.org> References: <960731127.481.1558055871793.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1762090486.564.1558142578339.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 55.26 KB...] changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 [each [703] message was followed by the same meta/main.yml dump shown under [701]; duplicates omitted] An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── create └── prepare --> Scenario: 'default' --> Action: 'create' [DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg. PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully.
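The [701]/[703] warnings ansible-lint raises on this role, above and again in the run that follows, all point at meta/main.yml still carrying the galaxy-init boilerplate rather than at the playbook under test. A sketch of a filled-in metadata file that would satisfy both rules, assuming the role path from the log; every value below is a placeholder to swap for the project's real details:

    # Overwrite the default galaxy metadata that rules 701/703 flag.
    cat > /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml <<'EOF'
    ---
    galaxy_info:
      author: placeholder-author            # 703: replace default author
      description: placeholder description  # 703: replace default description
      company: placeholder-company          # 703: replace default company
      license: GPLv3                        # 703: set the project's real license
      min_ansible_version: 2.4
      platforms:                            # 701: role info should contain platforms
        - name: EL
          versions:
            - 7
      galaxy_tags: []
    dependencies: []
    EOF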
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... [same [701] and [703] warnings for roles/firewall_config/meta/main.yml as in the first run; verbatim duplicates omitted] An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
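Note also the shell error earlier in this build: run-centos-ci.sh line 29 could not cd into gluster-ansible-infra/roles/backend_setup/ because the path is relative to whatever directory the previous iteration left behind, and the script carried on regardless, which is why firewall_config was simply tested twice. A hedged sketch of the usual hardening; the loop shape and variable names are illustrative, since the actual script is not shown here:

    # Anchor role paths to the repository root rather than the current
    # directory, and abort instead of silently re-testing the wrong role.
    REPO_ROOT=$(cd "$(dirname "$0")/.." && pwd)
    for role in firewall_config backend_setup; do
        cd "${REPO_ROOT}/roles/${role}" || exit 1
        molecule test
    done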
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 Build step 'Execute shell' marked build as failure Performing Post build task... Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Sun May 19 00:16:00 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 19 May 2019 00:16:00 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #368 In-Reply-To: <333938197.559.1558138430545.JavaMail.jenkins@jenkins.ci.centos.org> References: <333938197.559.1558138430545.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1572012448.628.1558224960108.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.39 KB...] Total 58 MB/s | 143 MB 00:02 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7_6.2.x86_64 7/49 Installing : kernel-headers-3.10.0-957.12.2.el7.x86_64 8/49 Installing : glibc-headers-2.17-260.el7_6.5.x86_64 9/49 Installing : glibc-devel-2.17-260.el7_6.5.x86_64 10/49 Installing : gcc-4.8.5-36.el7_6.2.x86_64 11/49 Installing : elfutils-0.172-2.el7.x86_64 12/49 Installing : pakchois-0.4-10.el7.x86_64 13/49 Installing : unzip-6.0-19.el7.x86_64 14/49 Installing : dwz-0.11-3.el7.x86_64 15/49 Installing : bzip2-1.0.6-13.el7.x86_64 16/49 Installing : usermode-1.111-5.el7.x86_64 17/49 Installing : patch-2.7.1-10.el7_5.x86_64 18/49 Installing : python-backports-1.0-8.el7.x86_64 19/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 20/49 Installing : python-urllib3-1.10.2-5.el7.noarch 21/49 Installing : python-requests-2.6.0-1.el7_1.noarch 22/49 Installing : python-babel-0.9.6-8.el7.noarch 23/49 Installing : distribution-gpg-keys-1.30-1.el7.noarch 24/49 Installing : mock-core-configs-30.2-1.el7.noarch 25/49 Installing : libmodman-2.0.1-8.el7.x86_64 26/49 Installing : libproxy-0.4.11-11.el7.x86_64 27/49 Installing : python-markupsafe-0.11-10.el7.x86_64 28/49 Installing : python-jinja2-2.7.2-3.el7_6.noarch 29/49 Installing : python2-distro-1.2.0-3.el7.noarch 30/49 Installing : gdb-7.6.1-114.el7.x86_64 31/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/49 Installing : perl-srpm-macros-1-8.el7.noarch 33/49 Installing : pigz-2.3.4-1.el7.x86_64 34/49 Installing : python2-pyroute2-0.4.13-2.el7.noarch 35/49 Installing : golang-src-1.11.5-1.el7.noarch 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : 
redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-9.el7_6.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.15-1.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 2/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 3/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 4/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 5/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 6/49 Verifying : zip-3.0-11.el7.x86_64 7/49 Verifying : nettle-2.7.1-8.el7.x86_64 8/49 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 9/49 Verifying : golang-src-1.11.5-1.el7.noarch 10/49 Verifying : python2-pyroute2-0.4.13-2.el7.noarch 11/49 Verifying : pigz-2.3.4-1.el7.x86_64 12/49 Verifying : perl-srpm-macros-1-8.el7.noarch 13/49 Verifying : golang-1.11.5-1.el7.x86_64 14/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 15/49 Verifying : glibc-devel-2.17-260.el7_6.5.x86_64 16/49 Verifying : python-jinja2-2.7.2-3.el7_6.noarch 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : gdb-7.6.1-114.el7.x86_64 19/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 20/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 21/49 Verifying : gnutls-3.3.29-9.el7_6.x86_64 22/49 Verifying : python2-distro-1.2.0-3.el7.noarch 23/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 24/49 Verifying : libmodman-2.0.1-8.el7.x86_64 25/49 Verifying : mpfr-3.1.1-4.el7.x86_64 26/49 Verifying : distribution-gpg-keys-1.30-1.el7.noarch 27/49 Verifying : python-babel-0.9.6-8.el7.noarch 28/49 Verifying : mock-1.4.15-1.el7.noarch 29/49 Verifying : apr-util-1.5.2-6.el7.x86_64 30/49 Verifying : python-backports-1.0-8.el7.x86_64 31/49 Verifying : patch-2.7.1-10.el7_5.x86_64 32/49 Verifying : libmpc-1.0.1-3.el7.x86_64 33/49 Verifying : usermode-1.111-5.el7.x86_64 34/49 Verifying : python-six-1.9.0-2.el7.noarch 35/49 Verifying : libproxy-0.4.11-11.el7.x86_64 36/49 Verifying : neon-0.30.0-3.el7.x86_64 37/49 Verifying : mock-core-configs-30.2-1.el7.noarch 38/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 39/49 Verifying : bzip2-1.0.6-13.el7.x86_64 40/49 Verifying : subversion-1.7.14-14.el7.x86_64 41/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 42/49 Verifying : glibc-headers-2.17-260.el7_6.5.x86_64 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 46/49 Verifying : pakchois-0.4-10.el7.x86_64 47/49 Verifying : elfutils-0.172-2.el7.x86_64 48/49 Verifying : kernel-headers-3.10.0-957.12.2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.15-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.30-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5 glibc-headers.x86_64 0:2.17-260.el7_6.5 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 
0:3.10.0-957.12.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-3.el7_6 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-3.el7 python2-pyroute2.noarch 0:0.4.13-2.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1524 0 --:--:-- --:--:-- --:--:-- 1527 100 8513k 100 8513k 0 0 9700k 0 --:--:-- --:--:-- --:--:-- 9700k Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 1964 0 --:--:-- --:--:-- --:--:-- 1971 0 38.3M 0 68127 0 0 136k 0 0:04:48 --:--:-- 0:04:48 136k100 38.3M 100 38.3M 0 0 44.6M 0 --:--:-- --:--:-- --:--:-- 103M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 700 0 --:--:-- --:--:-- --:--:-- 701 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 620 0 0 1749 0 --:--:-- --:--:-- --:--:-- 605k 100 10.7M 100 10.7M 0 0 16.8M 0 --:--:-- --:--:-- --:--:-- 16.8M ~/nightlyrpmzGx2xu/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmzGx2xu/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmzGx2xu/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmzGx2xu ~ INFO: mock.py version 1.4.15 starting (python version = 2.7.5)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmzGx2xu/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.15 INFO: Mock Version: 1.4.15 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmzGx2xu/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 24 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 90a9293761be450dad2b4ae72c8d20d5 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.qBPmGw:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
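As in build #350, the exception here only says that rpmbuild failed; the "Results and/or logs in" line above gives the directory where mock kept the chroot logs. A short sketch for pulling the actual error out on the builder, assuming that path is readable:

    # mock's result directory from the log above.
    ls /srv/glusterd2/nightly/master/7/x86_64
    # build.log holds the rpmbuild output; its first error is usually the
    # real failure, while root.log covers the chroot installation.
    grep -n -i -m 1 'error' /srv/glusterd2/nightly/master/7/x86_64/build.log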
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins8339017772843196212.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done ae09f253 +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 106 | n42.pufty | 172.19.3.106 | pufty | 3585 | Deployed | ae09f253 | None | None | 7 | x86_64 | 1 | 2410 | None | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Sun May 19 00:42:11 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 19 May 2019 00:42:11 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #172 In-Reply-To: <171252477.562.1558140100388.JavaMail.jenkins@jenkins.ci.centos.org> References: <171252477.562.1558140100388.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <2047816132.629.1558226531997.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 287.06 KB...] TASK [container-engine/docker : check number of search domains] **************** Sunday 19 May 2019 01:41:29 +0100 (0:00:00.349) 0:03:01.178 ************ TASK [container-engine/docker : check length of search domains] **************** Sunday 19 May 2019 01:41:29 +0100 (0:00:00.317) 0:03:01.495 ************ TASK [container-engine/docker : check for minimum kernel version] ************** Sunday 19 May 2019 01:41:30 +0100 (0:00:00.299) 0:03:01.795 ************ TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Sunday 19 May 2019 01:41:30 +0100 (0:00:00.290) 0:03:02.086 ************ TASK [container-engine/docker : Ensure old versions of Docker are not installed. 
| RedHat] *** Sunday 19 May 2019 01:41:31 +0100 (0:00:00.597) 0:03:02.683 ************ TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Sunday 19 May 2019 01:41:32 +0100 (0:00:01.294) 0:03:03.978 ************ TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Sunday 19 May 2019 01:41:32 +0100 (0:00:00.248) 0:03:04.227 ************ TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Sunday 19 May 2019 01:41:32 +0100 (0:00:00.250) 0:03:04.478 ************ TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Sunday 19 May 2019 01:41:33 +0100 (0:00:00.308) 0:03:04.786 ************ TASK [container-engine/docker : Configure docker repository on Fedora] ********* Sunday 19 May 2019 01:41:33 +0100 (0:00:00.308) 0:03:05.095 ************ TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Sunday 19 May 2019 01:41:33 +0100 (0:00:00.287) 0:03:05.382 ************ TASK [container-engine/docker : Copy yum.conf for editing] ********************* Sunday 19 May 2019 01:41:34 +0100 (0:00:00.288) 0:03:05.671 ************ TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Sunday 19 May 2019 01:41:34 +0100 (0:00:00.284) 0:03:05.956 ************ TASK [container-engine/docker : ensure docker packages are installed] ********** Sunday 19 May 2019 01:41:34 +0100 (0:00:00.275) 0:03:06.232 ************ TASK [container-engine/docker : Ensure docker packages are installed] ********** Sunday 19 May 2019 01:41:34 +0100 (0:00:00.362) 0:03:06.594 ************ TASK [container-engine/docker : get available packages on Ubuntu] ************** Sunday 19 May 2019 01:41:35 +0100 (0:00:00.368) 0:03:06.962 ************ TASK [container-engine/docker : show available packages on ubuntu] ************* Sunday 19 May 2019 01:41:35 +0100 (0:00:00.283) 0:03:07.246 ************ TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Sunday 19 May 2019 01:41:35 +0100 (0:00:00.282) 0:03:07.528 ************ TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Sunday 19 May 2019 01:41:36 +0100 (0:00:00.297) 0:03:07.826 ************ ok: [kube3] ok: [kube1] ok: [kube2] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Sunday 19 May 2019 01:41:38 +0100 (0:00:01.967) 0:03:09.793 ************ ok: [kube1] ok: [kube2] ok: [kube3] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Sunday 19 May 2019 01:41:39 +0100 (0:00:01.087) 0:03:10.880 ************ TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Sunday 19 May 2019 01:41:39 +0100 (0:00:00.288) 0:03:11.168 ************ changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Sunday 19 May 2019 01:41:40 +0100 (0:00:00.976) 0:03:12.145 ************ TASK [container-engine/docker : get systemd version] *************************** Sunday 19 May 2019 01:41:40 +0100 (0:00:00.329) 0:03:12.474 ************ TASK [container-engine/docker : Write docker.service systemd file] ************* Sunday 19 May 2019 01:41:41 +0100 (0:00:00.300) 0:03:12.774 ************ TASK [container-engine/docker : Write docker options systemd drop-in] ********** Sunday 19 May 2019 01:41:41 +0100 (0:00:00.299) 0:03:13.074 ************ changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Sunday 19 May 2019 01:41:43 +0100 (0:00:01.992) 0:03:15.066 ************ changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Sunday 19 May 2019 01:41:45 +0100 (0:00:02.175) 0:03:17.242 ************ TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Sunday 19 May 2019 01:41:45 +0100 (0:00:00.329) 0:03:17.572 ************ RUNNING HANDLER [container-engine/docker : restart docker] ********************* Sunday 19 May 2019 01:41:46 +0100 (0:00:00.233) 0:03:17.806 ************ changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Sunday 19 May 2019 01:41:47 +0100 (0:00:01.040) 0:03:18.847 ************ changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload docker.socket] ****** Sunday 19 May 2019 01:41:48 +0100 (0:00:01.276) 0:03:20.124 ************ RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Sunday 19 May 2019 01:41:48 +0100 (0:00:00.306) 0:03:20.430 ************ changed: [kube2] changed: [kube1] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Sunday 19 May 2019 01:41:52 +0100 (0:00:04.126) 0:03:24.557 ************ Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube1] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Sunday 19 May 2019 01:42:03 +0100 (0:00:10.205) 0:03:34.762 ************ changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : ensure docker service is started and enabled] *** Sunday 19 May 2019 01:42:04 +0100 (0:00:01.383) 0:03:36.146 ************ ok: [kube1] => (item=docker) ok: [kube3] => (item=docker) ok: [kube2] => (item=docker) TASK [download : include_tasks] ************************************************ Sunday 19 May 2019 01:42:05 +0100 (0:00:01.341) 0:03:37.488 ************ included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Sunday 19 May 2019 01:42:06 +0100 (0:00:00.514) 0:03:38.003 ************ ok: [kube1] ok: [kube3] ok: 
TASK [download : Register docker images info] **********************************
Sunday 19 May 2019 01:42:06 +0100 (0:00:00.514) 0:03:38.003 ************
ok: [kube1]
ok: [kube3]
ok: [kube2]

TASK [download : container_download | Create dest directory for saved/loaded container images] ***
Sunday 19 May 2019 01:42:07 +0100 (0:00:01.207) 0:03:39.210 ************
changed: [kube2]
changed: [kube1]
changed: [kube3]

TASK [download : container_download | create local directory for saved/loaded container images] ***
Sunday 19 May 2019 01:42:08 +0100 (0:00:01.158) 0:03:40.368 ************

TASK [download : Download items] ***********************************************
Sunday 19 May 2019 01:42:08 +0100 (0:00:00.128) 0:03:40.497 ************
fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
[...the identical failure repeats for kube2 and kube3 and for each remaining download item (10 failed tasks per host, 30 fatal messages in all), interleaved with three lines of the form "included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3"; 29 duplicate messages truncated...]

PLAY RECAP *********************************************************************
kube1 : ok=109 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0
kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0
kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0

Sunday 19 May 2019 01:42:11 +0100 (0:00:02.742) 0:03:43.239 ************
===============================================================================
Install packages ------------------------------------------------------- 32.35s
Wait for host to be available ------------------------------------------ 23.96s
gather facts from all instances ---------------------------------------- 17.78s
container-engine/docker : Docker | pause while Docker restarts --------- 10.21s
Persist loaded modules -------------------------------------------------- 6.17s
kubernetes/preinstall : Create kubernetes directories ------------------- 4.24s
container-engine/docker : Docker | reload docker ------------------------ 4.13s
Load required kernel modules -------------------------------------------- 2.78s
download : Download items ----------------------------------------------- 2.74s
bootstrap-os : Gather nodes hostnames ----------------------------------- 2.67s
kubernetes/preinstall : Create cni directories -------------------------- 2.51s
Extend root VG ---------------------------------------------------------- 2.49s
bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.42s
container-engine/docker : Write docker dns systemd drop-in -------------- 2.18s
download : Download items ----------------------------------------------- 2.12s
kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.09s
Gathering Facts --------------------------------------------------------- 2.07s
download : Sync container ----------------------------------------------- 2.06s
container-engine/docker : Write docker options systemd drop-in ---------- 1.99s
bootstrap-os : Create remote_tmp for it is used by another module ------- 1.98s

==> kube3: An error occurred. The error will be shown after all tasks complete.
An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below.
An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again:
Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again.
Build step 'Execute shell' marked build as failure
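All of the failures above are one and the same Ansible error: a task keyword (delegate_to) attached directly to a dynamic include, which TaskInclude rejects under newer Ansible releases. A sketch of the failing shape and one plausible fix using the apply option of include_tasks, which forwards keywords to the included tasks; the include target and variable name are illustrative, not necessarily kubespray's actual code:

# shape that triggers: 'delegate_to' is not a valid attribute for a TaskInclude
- name: container_download | Make download decision if pull is required by tag or sha256
  include_tasks: set_docker_image_facts.yml   # hypothetical include target
  delegate_to: "{{ download_delegate }}"      # rejected on the include itself

# possible fix: apply the keyword to the included tasks instead
- name: container_download | Make download decision if pull is required by tag or sha256
  include_tasks: set_docker_image_facts.yml
  apply:
    delegate_to: "{{ download_delegate }}"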
Performing Post build task...
Could not match :Build started : False
Logical operation result is FALSE
Skipping script : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0

From ci at centos.org Sun May 19 01:23:14 2019
From: ci at centos.org (ci at centos.org)
Date: Sun, 19 May 2019 01:23:14 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #197
In-Reply-To: <1762090486.564.1558142578339.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1762090486.564.1558142578339.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <499614617.634.1558228994343.JavaMail.jenkins@jenkins.ci.centos.org>

See 
------------------------------------------
[...truncated 55.26 KB...]
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) creation to complete] *******************************
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

PLAY RECAP *********************************************************************
localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0

--> Scenario: 'default'
--> Action: 'prepare'

PLAY [Prepare] *****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

PLAY RECAP *********************************************************************
instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}

[703] Should change default metadata: author
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1

[703] Should change default metadata: description
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1

[703] Should change default metadata: company
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1

[703] Should change default metadata: license
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1

[...each [703] warning printed the same meta/main.yml dump shown above; duplicate dumps truncated...]

An error occurred during the test sequence action: 'lint'. Cleaning up.
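Rules [701] and [703] fire because meta/main.yml still carries the ansible-galaxy init placeholders and lists no platforms. A meta/main.yml that would satisfy both rules looks roughly like this; the concrete values are illustrative, not the project's actual metadata:

galaxy_info:
  author: GlusterFS maintainers          # illustrative values throughout
  description: Firewall configuration for GlusterFS deployments
  company: Red Hat
  license: GPLv3
  min_ansible_version: 2.5
  platforms:
    - name: EL
      versions:
        - 7
  galaxy_tags:
    - gluster
dependencies: []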
--> Scenario: 'default'
--> Action: 'destroy'

PLAY [Destroy] *****************************************************************

TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Delete docker network(s)] ************************************************

PLAY RECAP *********************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0

./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory

--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── create
    └── prepare

--> Scenario: 'default'
--> Action: 'create'
[DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.

PLAY [Create] ******************************************************************

TASK [Log into a Docker registry] **********************************************
skipping: [localhost] => (item=None)

TASK [Create Dockerfiles from image names] *************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Discover local Docker images] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Build an Ansible compatible image] ***************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Create docker network(s)] ************************************************

TASK [Determine the CMD directives] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]

TASK [Create molecule instance(s)] *********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) creation to complete] *******************************
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

PLAY RECAP *********************************************************************
localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0

--> Scenario: 'default'
--> Action: 'prepare'

PLAY [Prepare] *****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

TASK [Install Dependency Packages] *********************************************
changed: [instance]

PLAY RECAP *********************************************************************
instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy
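The test matrix above is declared by the scenario section of the molecule.yml that was just validated. A minimal sketch of a molecule v2 scenario producing this sequence (driver, platforms and the rest of the file omitted):

scenario:
  name: default
  test_sequence:
    - lint
    - cleanup
    - destroy
    - dependency
    - syntax
    - create
    - prepare
    - converge
    - idempotence
    - side_effect
    - verify
    - cleanup
    - destroy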
--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[...the same [701] and [703] warnings as in the first lint run, each with the identical meta/main.yml dump; duplicates truncated...]

An error occurred during the test sequence action: 'lint'. Cleaning up.

--> Scenario: 'default'
--> Action: 'destroy'

PLAY [Destroy] *****************************************************************

TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]

TASK [Delete docker network(s)] ************************************************

PLAY RECAP *********************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0

Build step 'Execute shell' marked build as failure
Performing Post build task...
Could not match :Build started : False
Logical operation result is FALSE
Skipping script : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
END OF POST BUILD TASK : 0

From ci at centos.org Mon May 20 00:13:46 2019
From: ci at centos.org (ci at centos.org)
Date: Mon, 20 May 2019 00:13:46 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #369
In-Reply-To: <1572012448.628.1558224960108.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1572012448.628.1558224960108.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1737890063.674.1558311226539.JavaMail.jenkins@jenkins.ci.centos.org>

See 
------------------------------------------
[...truncated 37.40 KB...]
Total 99 MB/s | 143 MB 00:01
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
Importing GPG key 0x352C64E5:
Userid : "Fedora EPEL (7) "
Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5
Package : epel-release-7-11.noarch (@extras)
From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7_6.2.x86_64 7/49 Installing : kernel-headers-3.10.0-957.12.2.el7.x86_64 8/49 Installing : glibc-headers-2.17-260.el7_6.5.x86_64 9/49 Installing : glibc-devel-2.17-260.el7_6.5.x86_64 10/49 Installing : gcc-4.8.5-36.el7_6.2.x86_64 11/49 Installing : elfutils-0.172-2.el7.x86_64 12/49 Installing : pakchois-0.4-10.el7.x86_64 13/49 Installing : unzip-6.0-19.el7.x86_64 14/49 Installing : dwz-0.11-3.el7.x86_64 15/49 Installing : bzip2-1.0.6-13.el7.x86_64 16/49 Installing : usermode-1.111-5.el7.x86_64 17/49 Installing : patch-2.7.1-10.el7_5.x86_64 18/49 Installing : python-backports-1.0-8.el7.x86_64 19/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 20/49 Installing : python-urllib3-1.10.2-5.el7.noarch 21/49 Installing : python-requests-2.6.0-1.el7_1.noarch 22/49 Installing : python-babel-0.9.6-8.el7.noarch 23/49 Installing : distribution-gpg-keys-1.30-1.el7.noarch 24/49 Installing : mock-core-configs-30.2-1.el7.noarch 25/49 Installing : libmodman-2.0.1-8.el7.x86_64 26/49 Installing : libproxy-0.4.11-11.el7.x86_64 27/49 Installing : python-markupsafe-0.11-10.el7.x86_64 28/49 Installing : python-jinja2-2.7.2-3.el7_6.noarch 29/49 Installing : python2-distro-1.2.0-3.el7.noarch 30/49 Installing : gdb-7.6.1-114.el7.x86_64 31/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/49 Installing : perl-srpm-macros-1-8.el7.noarch 33/49 Installing : pigz-2.3.4-1.el7.x86_64 34/49 Installing : python2-pyroute2-0.4.13-2.el7.noarch 35/49 Installing : golang-src-1.11.5-1.el7.noarch 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : 
redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-9.el7_6.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.15-1.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 2/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 3/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 4/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 5/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 6/49 Verifying : zip-3.0-11.el7.x86_64 7/49 Verifying : nettle-2.7.1-8.el7.x86_64 8/49 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 9/49 Verifying : golang-src-1.11.5-1.el7.noarch 10/49 Verifying : python2-pyroute2-0.4.13-2.el7.noarch 11/49 Verifying : pigz-2.3.4-1.el7.x86_64 12/49 Verifying : perl-srpm-macros-1-8.el7.noarch 13/49 Verifying : golang-1.11.5-1.el7.x86_64 14/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 15/49 Verifying : glibc-devel-2.17-260.el7_6.5.x86_64 16/49 Verifying : python-jinja2-2.7.2-3.el7_6.noarch 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : gdb-7.6.1-114.el7.x86_64 19/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 20/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 21/49 Verifying : gnutls-3.3.29-9.el7_6.x86_64 22/49 Verifying : python2-distro-1.2.0-3.el7.noarch 23/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 24/49 Verifying : libmodman-2.0.1-8.el7.x86_64 25/49 Verifying : mpfr-3.1.1-4.el7.x86_64 26/49 Verifying : distribution-gpg-keys-1.30-1.el7.noarch 27/49 Verifying : python-babel-0.9.6-8.el7.noarch 28/49 Verifying : mock-1.4.15-1.el7.noarch 29/49 Verifying : apr-util-1.5.2-6.el7.x86_64 30/49 Verifying : python-backports-1.0-8.el7.x86_64 31/49 Verifying : patch-2.7.1-10.el7_5.x86_64 32/49 Verifying : libmpc-1.0.1-3.el7.x86_64 33/49 Verifying : usermode-1.111-5.el7.x86_64 34/49 Verifying : python-six-1.9.0-2.el7.noarch 35/49 Verifying : libproxy-0.4.11-11.el7.x86_64 36/49 Verifying : neon-0.30.0-3.el7.x86_64 37/49 Verifying : mock-core-configs-30.2-1.el7.noarch 38/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 39/49 Verifying : bzip2-1.0.6-13.el7.x86_64 40/49 Verifying : subversion-1.7.14-14.el7.x86_64 41/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 42/49 Verifying : glibc-headers-2.17-260.el7_6.5.x86_64 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 46/49 Verifying : pakchois-0.4-10.el7.x86_64 47/49 Verifying : elfutils-0.172-2.el7.x86_64 48/49 Verifying : kernel-headers-3.10.0-957.12.2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.15-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.30-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5 glibc-headers.x86_64 0:2.17-260.el7_6.5 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 
0:3.10.0-957.12.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-3.el7_6 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-3.el7 python2-pyroute2.noarch 0:0.4.13-2.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 2717 0 --:--:-- --:--:-- --:--:-- 2737 100 8513k 100 8513k 0 0 9836k 0 --:--:-- --:--:-- --:--:-- 9836k Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2899 0 --:--:-- --:--:-- --:--:-- 2889 100 38.3M 100 38.3M 0 0 43.2M 0 --:--:-- --:--:-- --:--:-- 43.2M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 1125 0 --:--:-- --:--:-- --:--:-- 1133 0 0 0 620 0 0 2517 0 --:--:-- --:--:-- --:--:-- 2517 0 10.7M 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 10.7M 100 10.7M 0 0 19.9M 0 --:--:-- --:--:-- --:--:-- 68.3M ~/nightlyrpmy3uqWw/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmy3uqWw/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmy3uqWw/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmy3uqWw ~ INFO: mock.py version 1.4.15 starting (python version = 2.7.5)... 
Start: init plugins
INFO: selinux disabled
Finish: init plugins
Start: run
INFO: Start(/root/nightlyrpmy3uqWw/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64)
Start: clean chroot
Finish: clean chroot
Start: chroot init
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled yum cache
Start: cleaning yum metadata
Finish: cleaning yum metadata
INFO: enabled HW Info plugin
Mock Version: 1.4.15
INFO: Mock Version: 1.4.15
Start: yum install
Finish: yum install
Start: creating root cache
Finish: creating root cache
Finish: chroot init
INFO: Installed packages:
Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
Start: Outputting list of installed packages
Finish: Outputting list of installed packages
ERROR: Exception(/root/nightlyrpmy3uqWw/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 1 minutes 32 seconds
INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64
INFO: Cleaning up build root ('cleanup_on_failure=True')
Start: clean chroot
Finish: clean chroot
ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 35e2485f379e43e3ab855550747029fc -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.5R5Lb9:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for :Building remotely : True
Logical operation result is TRUE
Running script : # cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins3492752627785127559.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done c98d731c
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname  | ip_address   | chassis | used_count | current_state | comment  | distro | rel  | centos_version | architecture | node_pool | console_port | flavor |
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 228     | n37.dusty | 172.19.2.101 | dusty   | 3588       | Deployed      | c98d731c | None   | None | 7              | x86_64       | 1         | 2360         | None   |
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0

From ci at centos.org Mon May 20 00:41:20 2019
From: ci at centos.org (ci at centos.org)
Date: Mon, 20 May 2019 00:41:20 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #173
In-Reply-To: <2047816132.629.1558226531997.JavaMail.jenkins@jenkins.ci.centos.org>
References: <2047816132.629.1558226531997.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1493441862.676.1558312880238.JavaMail.jenkins@jenkins.ci.centos.org>

See 
------------------------------------------
[...truncated 287.11 KB...]
TASK [container-engine/docker : check number of search domains] ****************
Monday 20 May 2019 01:40:37 +0100 (0:00:00.298) 0:03:05.515 ************

TASK [container-engine/docker : check length of search domains] ****************
Monday 20 May 2019 01:40:38 +0100 (0:00:00.350) 0:03:05.866 ************

TASK [container-engine/docker : check for minimum kernel version] **************
Monday 20 May 2019 01:40:38 +0100 (0:00:00.351) 0:03:06.218 ************

TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] ***
Monday 20 May 2019 01:40:38 +0100 (0:00:00.283) 0:03:06.502 ************

TASK [container-engine/docker : Ensure old versions of Docker are not installed. | RedHat] ***
Monday 20 May 2019 01:40:39 +0100 (0:00:00.595) 0:03:07.097 ************

TASK [container-engine/docker : ensure docker-ce repository public key is installed] ***
Monday 20 May 2019 01:40:40 +0100 (0:00:01.296) 0:03:08.394 ************

TASK [container-engine/docker : ensure docker-ce repository is enabled] ********
Monday 20 May 2019 01:40:41 +0100 (0:00:00.260) 0:03:08.655 ************

TASK [container-engine/docker : ensure docker-engine repository public key is installed] ***
Monday 20 May 2019 01:40:41 +0100 (0:00:00.247) 0:03:08.903 ************

TASK [container-engine/docker : ensure docker-engine repository is enabled] ****
Monday 20 May 2019 01:40:41 +0100 (0:00:00.297) 0:03:09.200 ************

TASK [container-engine/docker : Configure docker repository on Fedora] *********
Monday 20 May 2019 01:40:41 +0100 (0:00:00.309) 0:03:09.509 ************

TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] ***
Monday 20 May 2019 01:40:42 +0100 (0:00:00.268) 0:03:09.778 ************

TASK [container-engine/docker : Copy yum.conf for editing] *********************
Monday 20 May 2019 01:40:42 +0100 (0:00:00.274) 0:03:10.052 ************

TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ******
Monday 20 May 2019 01:40:42 +0100 (0:00:00.272) 0:03:10.325 ************

TASK [container-engine/docker : ensure docker packages are installed] **********
Monday 20 May 2019 01:40:42 +0100 (0:00:00.275) 0:03:10.601 ************

TASK [container-engine/docker : Ensure docker packages are installed] **********
Monday 20 May 2019 01:40:43 +0100 (0:00:00.353) 0:03:10.954 ************

TASK [container-engine/docker : get available packages on Ubuntu] **************
Monday 20 May 2019 01:40:43 +0100 (0:00:00.332) 0:03:11.287 ************

TASK [container-engine/docker : show available packages on ubuntu] *************
Monday 20 May 2019 01:40:43 +0100 (0:00:00.284) 0:03:11.571 ************

TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] ***
Monday 20 May 2019 01:40:44 +0100 (0:00:00.281) 0:03:11.853 ************

TASK [container-engine/docker : ensure service is started if docker packages are already present] ***
Monday 20 May 2019 01:40:44 +0100 (0:00:00.273) 0:03:12.127 ************
ok: [kube1]
ok: [kube2]
ok: [kube3]
[WARNING]: flush_handlers task does not support when conditional

TASK [container-engine/docker : set fact for docker_version] *******************
Monday 20 May 2019 01:40:46 +0100 (0:00:01.907) 0:03:14.034 ************
ok: [kube1]
ok: [kube2]
ok: [kube3]
TASK [container-engine/docker : check minimum docker version for docker_dns mode. You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] ***
Monday 20 May 2019 01:40:47 +0100 (0:00:01.123) 0:03:15.158 ************

TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] ***
Monday 20 May 2019 01:40:47 +0100 (0:00:00.295) 0:03:15.453 ************
changed: [kube1]
changed: [kube3]
changed: [kube2]

TASK [container-engine/docker : Write docker proxy drop-in] ********************
Monday 20 May 2019 01:40:48 +0100 (0:00:01.132) 0:03:16.586 ************

TASK [container-engine/docker : get systemd version] ***************************
Monday 20 May 2019 01:40:49 +0100 (0:00:00.358) 0:03:16.944 ************

TASK [container-engine/docker : Write docker.service systemd file] *************
Monday 20 May 2019 01:40:49 +0100 (0:00:00.299) 0:03:17.243 ************

TASK [container-engine/docker : Write docker options systemd drop-in] **********
Monday 20 May 2019 01:40:49 +0100 (0:00:00.318) 0:03:17.561 ************
changed: [kube1]
changed: [kube2]
changed: [kube3]

TASK [container-engine/docker : Write docker dns systemd drop-in] **************
Monday 20 May 2019 01:40:52 +0100 (0:00:02.074) 0:03:19.636 ************
changed: [kube1]
changed: [kube2]
changed: [kube3]

TASK [container-engine/docker : Copy docker orphan clean up script to the node] ***
Monday 20 May 2019 01:40:53 +0100 (0:00:01.927) 0:03:21.563 ************

TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] ***
Monday 20 May 2019 01:40:54 +0100 (0:00:00.358) 0:03:21.922 ************

RUNNING HANDLER [container-engine/docker : restart docker] *********************
Monday 20 May 2019 01:40:54 +0100 (0:00:00.268) 0:03:22.190 ************
changed: [kube1]
changed: [kube2]
changed: [kube3]

RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************
Monday 20 May 2019 01:40:55 +0100 (0:00:00.894) 0:03:23.085 ************
changed: [kube1]
changed: [kube2]
changed: [kube3]

RUNNING HANDLER [container-engine/docker : Docker | reload docker.socket] ******
Monday 20 May 2019 01:40:56 +0100 (0:00:01.186) 0:03:24.272 ************

RUNNING HANDLER [container-engine/docker : Docker | reload docker] *************
Monday 20 May 2019 01:40:56 +0100 (0:00:00.323) 0:03:24.595 ************
changed: [kube1]
changed: [kube2]
changed: [kube3]

RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] ***
Monday 20 May 2019 01:41:01 +0100 (0:00:04.340) 0:03:28.935 ************
Pausing for 10 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
[container-engine/docker : Docker | pause while Docker restarts]
Waiting for docker restart:
ok: [kube1]

RUNNING HANDLER [container-engine/docker : Docker | wait for docker] ***********
Monday 20 May 2019 01:41:11 +0100 (0:00:10.217) 0:03:39.153 ************
changed: [kube1]
changed: [kube2]
changed: [kube3]

TASK [container-engine/docker : ensure docker service is started and enabled] ***
Monday 20 May 2019 01:41:12 +0100 (0:00:01.289) 0:03:40.442 ************
ok: [kube1] => (item=docker)
ok: [kube2] => (item=docker)
ok: [kube3] => (item=docker)

TASK [download : include_tasks] ************************************************
Monday 20 May 2019 01:41:14 +0100 (0:00:01.281) 0:03:41.724 ************
included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3
TASK [download : Register docker images info] **********************************
Monday 20 May 2019 01:41:14 +0100 (0:00:00.508) 0:03:42.233 ************
ok: [kube2]
ok: [kube1]
ok: [kube3]

TASK [download : container_download | Create dest directory for saved/loaded container images] ***
Monday 20 May 2019 01:41:15 +0100 (0:00:01.249) 0:03:43.482 ************
changed: [kube1]
changed: [kube3]
changed: [kube2]

TASK [download : container_download | create local directory for saved/loaded container images] ***
Monday 20 May 2019 01:41:16 +0100 (0:00:01.010) 0:03:44.493 ************

TASK [download : Download items] ***********************************************
Monday 20 May 2019 01:41:16 +0100 (0:00:00.110) 0:03:44.603 ************
fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
[...the identical failure repeats for kube2 and kube3 and for each remaining download item (10 failed tasks per host, 30 fatal messages in all), interleaved with three lines of the form "included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3"; 29 duplicate messages truncated...]

PLAY RECAP *********************************************************************
kube1 : ok=109 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0
kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0
kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0

Monday 20 May 2019 01:41:19 +0100 (0:00:02.755) 0:03:47.358 ************
===============================================================================
Install packages ------------------------------------------------------- 32.59s
Wait for host to be available ------------------------------------------ 31.93s
gather facts from all instances ---------------------------------------- 16.52s
container-engine/docker : Docker | pause while Docker restarts --------- 10.22s
Persist loaded modules -------------------------------------------------- 5.98s
container-engine/docker : Docker | reload docker ------------------------ 4.34s
kubernetes/preinstall : Create kubernetes directories ------------------- 3.86s
download : Download items ----------------------------------------------- 2.76s
Load required kernel modules -------------------------------------------- 2.60s
kubernetes/preinstall : Create cni directories -------------------------- 2.49s
Extend root VG ---------------------------------------------------------- 2.45s
bootstrap-os : Gather nodes hostnames ----------------------------------- 2.45s
bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.41s
kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.21s
container-engine/docker : Write docker options systemd drop-in ---------- 2.07s
kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.03s
container-engine/docker : Write docker dns systemd drop-in -------------- 1.93s
container-engine/docker : ensure service is started if docker packages are already present --- 1.91s
Extend the root LV and FS to occupy remaining space --------------------- 1.90s
Gathering Facts --------------------------------------------------------- 1.89s

==> kube3: An error occurred. The error will be shown after all tasks complete.
An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below.
An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again:
Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again.
Build step 'Execute shell' marked build as failure
Performing Post build task...
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Mon May 20 01:19:46 2019 From: ci at centos.org (ci at centos.org) Date: Mon, 20 May 2019 01:19:46 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #198 In-Reply-To: <499614617.634.1558228994343.JavaMail.jenkins@jenkins.ci.centos.org> References: <499614617.634.1558228994343.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <263674211.677.1558315186338.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 55.26 KB...] changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 [...the same meta/main.yml dump is printed after each finding; verbatim repeats omitted...] An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
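The [701] and [703] findings above mean the role's meta/main.yml still carries the unedited ansible-galaxy init boilerplate ('your name', 'your description', and so on) and lists no platforms; since Molecule treats lint findings as fatal, this alone fails the job. A sketch of the kind of meta/main.yml that would satisfy both rules; every value below is a placeholder, not the project's actual metadata:

galaxy_info:
  author: Gluster infra maintainers         # placeholder; any non-default value clears [703]
  description: Firewall configuration for GlusterFS hosts   # placeholder
  company: Red Hat                          # placeholder (optional field)
  license: GPLv3                            # placeholder; a concrete license clears [703]
  min_ansible_version: 2.5                  # placeholder
  platforms:                                # a non-empty platforms list clears [701]
    - name: EL
      versions:
        - 7
  galaxy_tags: []
dependencies: []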
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── create └── prepare --> Scenario: 'default' --> Action: 'create' [DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg. PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. 
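The recurring "FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left)." lines above are not themselves failures; they are Ansible's until/retries loop polling an async job until the container comes up. A sketch of the kind of task that produces this output, with assumed register and variable names rather than Molecule's exact source:

- name: Wait for instance(s) creation to complete
  async_status:
    jid: "{{ item.ansible_job_id }}"             # job id handed back by the async create task
  register: create_result
  until: create_result.finished                  # keep polling until the async job reports done
  retries: 300                                   # matches the "(300 retries left)" countdown
  with_items: "{{ create_instances.results }}"   # hypothetical register from the create task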
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... [701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 [...the same meta/main.yml dump is printed after each finding; verbatim repeats omitted...] An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 Build step 'Execute shell' marked build as failure Performing Post build task... Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Tue May 21 00:16:10 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 21 May 2019 00:16:10 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #370 In-Reply-To: <1737890063.674.1558311226539.JavaMail.jenkins@jenkins.ci.centos.org> References: <1737890063.674.1558311226539.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1533722114.778.1558397770512.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.40 KB...] Total 63 MB/s | 143 MB 00:02 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7_6.2.x86_64 7/49 Installing : kernel-headers-3.10.0-957.12.2.el7.x86_64 8/49 Installing : glibc-headers-2.17-260.el7_6.5.x86_64 9/49 Installing : glibc-devel-2.17-260.el7_6.5.x86_64 10/49 Installing : gcc-4.8.5-36.el7_6.2.x86_64 11/49 Installing : elfutils-0.172-2.el7.x86_64 12/49 Installing : pakchois-0.4-10.el7.x86_64 13/49 Installing : unzip-6.0-19.el7.x86_64 14/49 Installing : dwz-0.11-3.el7.x86_64 15/49 Installing : bzip2-1.0.6-13.el7.x86_64 16/49 Installing : usermode-1.111-5.el7.x86_64 17/49 Installing : patch-2.7.1-10.el7_5.x86_64 18/49 Installing : python-backports-1.0-8.el7.x86_64 19/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 20/49 Installing : python-urllib3-1.10.2-5.el7.noarch 21/49 Installing : python-requests-2.6.0-1.el7_1.noarch 22/49 Installing : python-babel-0.9.6-8.el7.noarch 23/49 Installing : distribution-gpg-keys-1.30-1.el7.noarch 24/49 Installing : mock-core-configs-30.2-1.el7.noarch 25/49 Installing : libmodman-2.0.1-8.el7.x86_64 26/49 Installing : libproxy-0.4.11-11.el7.x86_64 27/49 Installing : python-markupsafe-0.11-10.el7.x86_64 28/49 Installing : python-jinja2-2.7.2-3.el7_6.noarch 29/49 Installing : python2-distro-1.2.0-3.el7.noarch 30/49 Installing : gdb-7.6.1-114.el7.x86_64 31/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/49 Installing : perl-srpm-macros-1-8.el7.noarch 33/49 Installing : pigz-2.3.4-1.el7.x86_64 34/49 Installing : python2-pyroute2-0.4.13-2.el7.noarch 35/49 Installing : golang-src-1.11.5-1.el7.noarch 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : 
redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-9.el7_6.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.15-1.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 2/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 3/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 4/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 5/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 6/49 Verifying : zip-3.0-11.el7.x86_64 7/49 Verifying : nettle-2.7.1-8.el7.x86_64 8/49 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 9/49 Verifying : golang-src-1.11.5-1.el7.noarch 10/49 Verifying : python2-pyroute2-0.4.13-2.el7.noarch 11/49 Verifying : pigz-2.3.4-1.el7.x86_64 12/49 Verifying : perl-srpm-macros-1-8.el7.noarch 13/49 Verifying : golang-1.11.5-1.el7.x86_64 14/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 15/49 Verifying : glibc-devel-2.17-260.el7_6.5.x86_64 16/49 Verifying : python-jinja2-2.7.2-3.el7_6.noarch 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : gdb-7.6.1-114.el7.x86_64 19/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 20/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 21/49 Verifying : gnutls-3.3.29-9.el7_6.x86_64 22/49 Verifying : python2-distro-1.2.0-3.el7.noarch 23/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 24/49 Verifying : libmodman-2.0.1-8.el7.x86_64 25/49 Verifying : mpfr-3.1.1-4.el7.x86_64 26/49 Verifying : distribution-gpg-keys-1.30-1.el7.noarch 27/49 Verifying : python-babel-0.9.6-8.el7.noarch 28/49 Verifying : mock-1.4.15-1.el7.noarch 29/49 Verifying : apr-util-1.5.2-6.el7.x86_64 30/49 Verifying : python-backports-1.0-8.el7.x86_64 31/49 Verifying : patch-2.7.1-10.el7_5.x86_64 32/49 Verifying : libmpc-1.0.1-3.el7.x86_64 33/49 Verifying : usermode-1.111-5.el7.x86_64 34/49 Verifying : python-six-1.9.0-2.el7.noarch 35/49 Verifying : libproxy-0.4.11-11.el7.x86_64 36/49 Verifying : neon-0.30.0-3.el7.x86_64 37/49 Verifying : mock-core-configs-30.2-1.el7.noarch 38/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 39/49 Verifying : bzip2-1.0.6-13.el7.x86_64 40/49 Verifying : subversion-1.7.14-14.el7.x86_64 41/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 42/49 Verifying : glibc-headers-2.17-260.el7_6.5.x86_64 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 46/49 Verifying : pakchois-0.4-10.el7.x86_64 47/49 Verifying : elfutils-0.172-2.el7.x86_64 48/49 Verifying : kernel-headers-3.10.0-957.12.2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.15-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.30-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5 glibc-headers.x86_64 0:2.17-260.el7_6.5 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 
0:3.10.0-957.12.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-3.el7_6 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-3.el7 python2-pyroute2.noarch 0:0.4.13-2.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1944 0 --:--:-- --:--:-- --:--:-- 1951 100 8513k 100 8513k 0 0 15.3M 0 --:--:-- --:--:-- --:--:-- 15.3M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2194 0 --:--:-- --:--:-- --:--:-- 2192 100 38.3M 100 38.3M 0 0 49.3M 0 --:--:-- --:--:-- --:--:-- 49.3M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 564 0 --:--:-- --:--:-- --:--:-- 566 0 0 0 620 0 0 1399 0 --:--:-- --:--:-- --:--:-- 1399 100 10.7M 100 10.7M 0 0 15.0M 0 --:--:-- --:--:-- --:--:-- 15.0M ~/nightlyrpmwMNv3U/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmwMNv3U/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmwMNv3U/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmwMNv3U ~ INFO: mock.py version 1.4.15 starting (python version = 2.7.5)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmwMNv3U/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.15 INFO: Mock Version: 1.4.15 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmwMNv3U/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 27 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M dc987caaee074c7d9c2a9cae3e47c7bd -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.VFDFEF:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins2057812741725567780.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done bc4c1f45 +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 76 | n12.pufty | 172.19.3.76 | pufty | 3592 | Deployed | bc4c1f45 | None | None | 7 | x86_64 | 1 | 2110 | None | +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Tue May 21 00:42:46 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 21 May 2019 00:42:46 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #174 In-Reply-To: <1493441862.676.1558312880238.JavaMail.jenkins@jenkins.ci.centos.org> References: <1493441862.676.1558312880238.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1407869471.780.1558399366605.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 287.23 KB...] TASK [container-engine/docker : check number of search domains] **************** Tuesday 21 May 2019 01:42:04 +0100 (0:00:00.294) 0:02:57.133 *********** TASK [container-engine/docker : check length of search domains] **************** Tuesday 21 May 2019 01:42:04 +0100 (0:00:00.289) 0:02:57.423 *********** TASK [container-engine/docker : check for minimum kernel version] ************** Tuesday 21 May 2019 01:42:05 +0100 (0:00:00.291) 0:02:57.715 *********** TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Tuesday 21 May 2019 01:42:05 +0100 (0:00:00.286) 0:02:58.001 *********** TASK [container-engine/docker : Ensure old versions of Docker are not installed. 
| RedHat] *** Tuesday 21 May 2019 01:42:05 +0100 (0:00:00.590) 0:02:58.591 *********** TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Tuesday 21 May 2019 01:42:07 +0100 (0:00:01.342) 0:02:59.934 *********** TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Tuesday 21 May 2019 01:42:07 +0100 (0:00:00.251) 0:03:00.186 *********** TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Tuesday 21 May 2019 01:42:07 +0100 (0:00:00.249) 0:03:00.435 *********** TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Tuesday 21 May 2019 01:42:08 +0100 (0:00:00.301) 0:03:00.737 *********** TASK [container-engine/docker : Configure docker repository on Fedora] ********* Tuesday 21 May 2019 01:42:08 +0100 (0:00:00.296) 0:03:01.033 *********** TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Tuesday 21 May 2019 01:42:08 +0100 (0:00:00.280) 0:03:01.314 *********** TASK [container-engine/docker : Copy yum.conf for editing] ********************* Tuesday 21 May 2019 01:42:08 +0100 (0:00:00.276) 0:03:01.591 *********** TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Tuesday 21 May 2019 01:42:09 +0100 (0:00:00.275) 0:03:01.867 *********** TASK [container-engine/docker : ensure docker packages are installed] ********** Tuesday 21 May 2019 01:42:09 +0100 (0:00:00.280) 0:03:02.147 *********** TASK [container-engine/docker : Ensure docker packages are installed] ********** Tuesday 21 May 2019 01:42:09 +0100 (0:00:00.371) 0:03:02.518 *********** TASK [container-engine/docker : get available packages on Ubuntu] ************** Tuesday 21 May 2019 01:42:10 +0100 (0:00:00.352) 0:03:02.870 *********** TASK [container-engine/docker : show available packages on ubuntu] ************* Tuesday 21 May 2019 01:42:10 +0100 (0:00:00.279) 0:03:03.150 *********** TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Tuesday 21 May 2019 01:42:10 +0100 (0:00:00.284) 0:03:03.434 *********** TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Tuesday 21 May 2019 01:42:11 +0100 (0:00:00.290) 0:03:03.725 *********** ok: [kube1] ok: [kube2] ok: [kube3] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Tuesday 21 May 2019 01:42:12 +0100 (0:00:01.970) 0:03:05.696 *********** ok: [kube1] ok: [kube2] ok: [kube3] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Tuesday 21 May 2019 01:42:14 +0100 (0:00:01.110) 0:03:06.806 *********** TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Tuesday 21 May 2019 01:42:14 +0100 (0:00:00.289) 0:03:07.096 *********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Tuesday 21 May 2019 01:42:15 +0100 (0:00:00.937) 0:03:08.033 *********** TASK [container-engine/docker : get systemd version] *************************** Tuesday 21 May 2019 01:42:15 +0100 (0:00:00.306) 0:03:08.339 *********** TASK [container-engine/docker : Write docker.service systemd file] ************* Tuesday 21 May 2019 01:42:15 +0100 (0:00:00.299) 0:03:08.639 *********** TASK [container-engine/docker : Write docker options systemd drop-in] ********** Tuesday 21 May 2019 01:42:16 +0100 (0:00:00.300) 0:03:08.940 *********** changed: [kube3] changed: [kube1] changed: [kube2] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Tuesday 21 May 2019 01:42:18 +0100 (0:00:02.099) 0:03:11.039 *********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Tuesday 21 May 2019 01:42:20 +0100 (0:00:02.118) 0:03:13.158 *********** TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Tuesday 21 May 2019 01:42:20 +0100 (0:00:00.362) 0:03:13.521 *********** RUNNING HANDLER [container-engine/docker : restart docker] ********************* Tuesday 21 May 2019 01:42:21 +0100 (0:00:00.289) 0:03:13.810 *********** changed: [kube3] changed: [kube1] changed: [kube2] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Tuesday 21 May 2019 01:42:22 +0100 (0:00:01.024) 0:03:14.835 *********** changed: [kube3] changed: [kube1] changed: [kube2] RUNNING HANDLER [container-engine/docker : Docker | reload docker.socket] ****** Tuesday 21 May 2019 01:42:23 +0100 (0:00:01.282) 0:03:16.118 *********** RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Tuesday 21 May 2019 01:42:23 +0100 (0:00:00.282) 0:03:16.400 *********** changed: [kube3] changed: [kube2] changed: [kube1] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Tuesday 21 May 2019 01:42:27 +0100 (0:00:04.251) 0:03:20.651 *********** Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube3] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Tuesday 21 May 2019 01:42:38 +0100 (0:00:10.216) 0:03:30.868 *********** changed: [kube3] changed: [kube2] changed: [kube1] TASK [container-engine/docker : ensure docker service is started and enabled] *** Tuesday 21 May 2019 01:42:39 +0100 (0:00:01.245) 0:03:32.114 *********** ok: [kube1] => (item=docker) ok: [kube3] => (item=docker) ok: [kube2] => (item=docker) TASK [download : include_tasks] ************************************************ Tuesday 21 May 2019 01:42:40 +0100 (0:00:01.198) 0:03:33.312 *********** included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Tuesday 21 May 2019 01:42:41 +0100 (0:00:00.537) 0:03:33.849 *********** ok: [kube1] ok: [kube3] ok: 
[kube2] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Tuesday 21 May 2019 01:42:42 +0100 (0:00:01.186) 0:03:35.036 *********** changed: [kube1] changed: [kube3] changed: [kube2] TASK [download : container_download | create local directory for saved/loaded container images] *** Tuesday 21 May 2019 01:42:43 +0100 (0:00:01.010) 0:03:36.046 *********** TASK [download : Download items] *********************************************** Tuesday 21 May 2019 01:42:43 +0100 (0:00:00.152) 0:03:36.199 *********** fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} PLAY RECAP ********************************************************************* kube1 : ok=108 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0 kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0 kube3 : ok=95 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0 Tuesday 21 May 2019 01:42:46 +0100 (0:00:02.724) 0:03:38.924 *********** =============================================================================== Install packages ------------------------------------------------------- 32.81s Wait for host to be available ------------------------------------------ 21.38s gather facts from all instances ---------------------------------------- 17.27s container-engine/docker : Docker | pause while Docker restarts --------- 10.22s Persist loaded modules -------------------------------------------------- 6.06s container-engine/docker : Docker | reload docker ------------------------ 4.25s kubernetes/preinstall : Create kubernetes directories ------------------- 3.91s download : Download items ----------------------------------------------- 2.72s bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.69s bootstrap-os : Gather nodes hostnames ----------------------------------- 2.67s Load required kernel modules -------------------------------------------- 2.63s Extend root VG ---------------------------------------------------------- 2.56s kubernetes/preinstall : Create cni directories -------------------------- 2.43s kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.18s Gathering Facts --------------------------------------------------------- 2.18s container-engine/docker : Write docker dns systemd drop-in -------------- 2.12s container-engine/docker : Write docker options systemd drop-in ---------- 2.10s download : Sync container ----------------------------------------------- 2.06s bootstrap-os : Create remote_tmp for it is used by another module ------- 2.04s kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.04s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Tue May 21 01:23:11 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 21 May 2019 01:23:11 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #199 In-Reply-To: <263674211.677.1558315186338.JavaMail.jenkins@jenkins.ci.centos.org> References: <263674211.677.1558315186338.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <2108185715.788.1558401791638.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 55.26 KB...] changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
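The five ansible-lint hits above are rules 701 and 703: the role's meta/main.yml still carries the placeholders that ansible-galaxy init generates ('your name', 'your description', and so on) and declares no platforms. A minimal sketch of a meta/main.yml that would satisfy both rules follows; the author, description, company, and platform values are illustrative, not the project's real metadata:

    galaxy_info:
      author: Gluster maintainers                        # [703] replace 'your name'
      description: Firewall configuration for GlusterFS  # [703] replace 'your description'
      company: Red Hat                                   # [703] replace 'your company (optional)'
      license: GPLv2                                     # [703] replace the license placeholder
      min_ansible_version: 2.5
      platforms:                                         # [701] role info should contain platforms
        - name: EL
          versions:
            - 7
      galaxy_tags: []
    dependencies: []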
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── create └── prepare --> Scenario: 'default' --> Action: 'create' [DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg. PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully.
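The deprecation warning in the create phase above refers to the docker_image_facts module, which Ansible renamed to docker_image_info; the arguments are unchanged. A minimal sketch of the rename, using an illustrative image name and loop rather than molecule's exact playbook:

    # Old spelling, triggers the deprecation warning
    - name: Discover local Docker images
      docker_image_facts:
        name: "molecule_local/{{ item.name }}"  # illustrative image name
      with_items: "{{ molecule_yml.platforms }}"

    # Current spelling, same arguments
    - name: Discover local Docker images
      docker_image_info:
        name: "molecule_local/{{ item.name }}"
      with_items: "{{ molecule_yml.platforms }}"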
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... [701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
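One reading note on the create and destroy phases above: the FAILED - RETRYING lines are not real failures. Molecule launches instance creation and deletion as asynchronous tasks and then polls them with async_status for up to 300 retries, so any poll that lands before the job finishes is logged as a retry. A minimal sketch of that pattern, with illustrative container parameters rather than molecule's exact source:

    - name: Create molecule instance(s)
      docker_container:
        name: "{{ item.name }}"
        image: "molecule_local/{{ item.image }}"  # illustrative image
        state: started
      register: server
      with_items: "{{ molecule_yml.platforms }}"
      async: 7200   # fire and forget; do not block on the container
      poll: 0

    - name: Wait for instance(s) creation to complete
      async_status:
        jid: "{{ item.ansible_job_id }}"
      register: docker_jobs
      until: docker_jobs.finished
      retries: 300  # matches the '(300 retries left)' seen in the log
      with_items: "{{ server.results }}"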
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 Build step 'Execute shell' marked build as failure Performing Post build task... Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Wed May 22 00:15:53 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 22 May 2019 00:15:53 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #371 In-Reply-To: <1533722114.778.1558397770512.JavaMail.jenkins@jenkins.ci.centos.org> References: <1533722114.778.1558397770512.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1059354321.923.1558484153205.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.41 KB...] Total 60 MB/s | 143 MB 00:02 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7_6.2.x86_64 7/49 Installing : kernel-headers-3.10.0-957.12.2.el7.x86_64 8/49 Installing : glibc-headers-2.17-260.el7_6.5.x86_64 9/49 Installing : glibc-devel-2.17-260.el7_6.5.x86_64 10/49 Installing : gcc-4.8.5-36.el7_6.2.x86_64 11/49 Installing : elfutils-0.172-2.el7.x86_64 12/49 Installing : pakchois-0.4-10.el7.x86_64 13/49 Installing : unzip-6.0-19.el7.x86_64 14/49 Installing : dwz-0.11-3.el7.x86_64 15/49 Installing : bzip2-1.0.6-13.el7.x86_64 16/49 Installing : usermode-1.111-5.el7.x86_64 17/49 Installing : patch-2.7.1-10.el7_5.x86_64 18/49 Installing : python-backports-1.0-8.el7.x86_64 19/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 20/49 Installing : python-urllib3-1.10.2-5.el7.noarch 21/49 Installing : python-requests-2.6.0-1.el7_1.noarch 22/49 Installing : python-babel-0.9.6-8.el7.noarch 23/49 Installing : distribution-gpg-keys-1.30-1.el7.noarch 24/49 Installing : mock-core-configs-30.2-1.el7.noarch 25/49 Installing : libmodman-2.0.1-8.el7.x86_64 26/49 Installing : libproxy-0.4.11-11.el7.x86_64 27/49 Installing : python-markupsafe-0.11-10.el7.x86_64 28/49 Installing : python-jinja2-2.7.2-3.el7_6.noarch 29/49 Installing : python2-distro-1.2.0-3.el7.noarch 30/49 Installing : gdb-7.6.1-114.el7.x86_64 31/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/49 Installing : perl-srpm-macros-1-8.el7.noarch 33/49 Installing : pigz-2.3.4-1.el7.x86_64 34/49 Installing : python2-pyroute2-0.4.13-2.el7.noarch 35/49 Installing : golang-src-1.11.5-1.el7.noarch 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : 
redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-9.el7_6.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.15-1.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 2/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 3/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 4/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 5/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 6/49 Verifying : zip-3.0-11.el7.x86_64 7/49 Verifying : nettle-2.7.1-8.el7.x86_64 8/49 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 9/49 Verifying : golang-src-1.11.5-1.el7.noarch 10/49 Verifying : python2-pyroute2-0.4.13-2.el7.noarch 11/49 Verifying : pigz-2.3.4-1.el7.x86_64 12/49 Verifying : perl-srpm-macros-1-8.el7.noarch 13/49 Verifying : golang-1.11.5-1.el7.x86_64 14/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 15/49 Verifying : glibc-devel-2.17-260.el7_6.5.x86_64 16/49 Verifying : python-jinja2-2.7.2-3.el7_6.noarch 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : gdb-7.6.1-114.el7.x86_64 19/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 20/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 21/49 Verifying : gnutls-3.3.29-9.el7_6.x86_64 22/49 Verifying : python2-distro-1.2.0-3.el7.noarch 23/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 24/49 Verifying : libmodman-2.0.1-8.el7.x86_64 25/49 Verifying : mpfr-3.1.1-4.el7.x86_64 26/49 Verifying : distribution-gpg-keys-1.30-1.el7.noarch 27/49 Verifying : python-babel-0.9.6-8.el7.noarch 28/49 Verifying : mock-1.4.15-1.el7.noarch 29/49 Verifying : apr-util-1.5.2-6.el7.x86_64 30/49 Verifying : python-backports-1.0-8.el7.x86_64 31/49 Verifying : patch-2.7.1-10.el7_5.x86_64 32/49 Verifying : libmpc-1.0.1-3.el7.x86_64 33/49 Verifying : usermode-1.111-5.el7.x86_64 34/49 Verifying : python-six-1.9.0-2.el7.noarch 35/49 Verifying : libproxy-0.4.11-11.el7.x86_64 36/49 Verifying : neon-0.30.0-3.el7.x86_64 37/49 Verifying : mock-core-configs-30.2-1.el7.noarch 38/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 39/49 Verifying : bzip2-1.0.6-13.el7.x86_64 40/49 Verifying : subversion-1.7.14-14.el7.x86_64 41/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 42/49 Verifying : glibc-headers-2.17-260.el7_6.5.x86_64 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 46/49 Verifying : pakchois-0.4-10.el7.x86_64 47/49 Verifying : elfutils-0.172-2.el7.x86_64 48/49 Verifying : kernel-headers-3.10.0-957.12.2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.15-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.30-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5 glibc-headers.x86_64 0:2.17-260.el7_6.5 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 
0:3.10.0-957.12.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-3.el7_6 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-3.el7 python2-pyroute2.noarch 0:0.4.13-2.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1602 0 --:--:-- --:--:-- --:--:-- 1609 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 8513k 100 8513k 0 0 12.2M 0 --:--:-- --:--:-- --:--:-- 62.0M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2124 0 --:--:-- --:--:-- --:--:-- 2132 100 38.3M 100 38.3M 0 0 47.3M 0 --:--:-- --:--:-- --:--:-- 47.3M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 557 0 --:--:-- --:--:-- --:--:-- 558 0 0 0 620 0 0 1602 0 --:--:-- --:--:-- --:--:-- 1602 100 10.7M 100 10.7M 0 0 17.5M 0 --:--:-- --:--:-- --:--:-- 17.5M ~/nightlyrpmsGFejy/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmsGFejy/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmsGFejy/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmsGFejy ~ INFO: mock.py version 1.4.15 starting (python version = 2.7.5)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmsGFejy/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.15 INFO: Mock Version: 1.4.15 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmsGFejy/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 24 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 8e7ea633f5d1474c8f0e62466436f907 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.uowSHd:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins5314059360980977861.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done f33d8522 +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 149 | n22.crusty | 172.19.2.22 | crusty | 3596 | Deployed | f33d8522 | None | None | 7 | x86_64 | 1 | 2210 | None | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Wed May 22 00:37:13 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 22 May 2019 00:37:13 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #175 In-Reply-To: <1407869471.780.1558399366605.JavaMail.jenkins@jenkins.ci.centos.org> References: <1407869471.780.1558399366605.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <520436857.926.1558485434124.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 287.06 KB...] TASK [container-engine/docker : check number of search domains] **************** Wednesday 22 May 2019 01:36:47 +0100 (0:00:00.127) 0:01:54.759 ********* TASK [container-engine/docker : check length of search domains] **************** Wednesday 22 May 2019 01:36:48 +0100 (0:00:00.135) 0:01:54.894 ********* TASK [container-engine/docker : check for minimum kernel version] ************** Wednesday 22 May 2019 01:36:48 +0100 (0:00:00.123) 0:01:55.018 ********* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Wednesday 22 May 2019 01:36:48 +0100 (0:00:00.117) 0:01:55.136 ********* TASK [container-engine/docker : Ensure old versions of Docker are not installed. 
| RedHat] *** Wednesday 22 May 2019 01:36:48 +0100 (0:00:00.242) 0:01:55.378 ********* TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Wednesday 22 May 2019 01:36:49 +0100 (0:00:00.625) 0:01:56.004 ********* TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Wednesday 22 May 2019 01:36:49 +0100 (0:00:00.108) 0:01:56.113 ********* TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Wednesday 22 May 2019 01:36:49 +0100 (0:00:00.110) 0:01:56.224 ********* TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Wednesday 22 May 2019 01:36:49 +0100 (0:00:00.143) 0:01:56.367 ********* TASK [container-engine/docker : Configure docker repository on Fedora] ********* Wednesday 22 May 2019 01:36:49 +0100 (0:00:00.149) 0:01:56.516 ********* TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Wednesday 22 May 2019 01:36:49 +0100 (0:00:00.121) 0:01:56.638 ********* TASK [container-engine/docker : Copy yum.conf for editing] ********************* Wednesday 22 May 2019 01:36:49 +0100 (0:00:00.124) 0:01:56.763 ********* TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Wednesday 22 May 2019 01:36:50 +0100 (0:00:00.126) 0:01:56.890 ********* TASK [container-engine/docker : ensure docker packages are installed] ********** Wednesday 22 May 2019 01:36:50 +0100 (0:00:00.124) 0:01:57.014 ********* TASK [container-engine/docker : Ensure docker packages are installed] ********** Wednesday 22 May 2019 01:36:50 +0100 (0:00:00.157) 0:01:57.172 ********* TASK [container-engine/docker : get available packages on Ubuntu] ************** Wednesday 22 May 2019 01:36:50 +0100 (0:00:00.145) 0:01:57.318 ********* TASK [container-engine/docker : show available packages on ubuntu] ************* Wednesday 22 May 2019 01:36:50 +0100 (0:00:00.122) 0:01:57.440 ********* TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Wednesday 22 May 2019 01:36:50 +0100 (0:00:00.122) 0:01:57.562 ********* TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Wednesday 22 May 2019 01:36:50 +0100 (0:00:00.126) 0:01:57.689 ********* ok: [kube1] ok: [kube3] ok: [kube2] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Wednesday 22 May 2019 01:36:51 +0100 (0:00:01.044) 0:01:58.733 ********* ok: [kube1] ok: [kube2] ok: [kube3] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Wednesday 22 May 2019 01:36:52 +0100 (0:00:00.494) 0:01:59.228 ********* TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Wednesday 22 May 2019 01:36:52 +0100 (0:00:00.120) 0:01:59.348 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Wednesday 22 May 2019 01:36:52 +0100 (0:00:00.448) 0:01:59.796 ********* TASK [container-engine/docker : get systemd version] *************************** Wednesday 22 May 2019 01:36:53 +0100 (0:00:00.147) 0:01:59.944 ********* TASK [container-engine/docker : Write docker.service systemd file] ************* Wednesday 22 May 2019 01:36:53 +0100 (0:00:00.141) 0:02:00.086 ********* TASK [container-engine/docker : Write docker options systemd drop-in] ********** Wednesday 22 May 2019 01:36:53 +0100 (0:00:00.143) 0:02:00.229 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Wednesday 22 May 2019 01:36:54 +0100 (0:00:00.961) 0:02:01.190 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Wednesday 22 May 2019 01:36:55 +0100 (0:00:01.077) 0:02:02.268 ********* TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Wednesday 22 May 2019 01:36:55 +0100 (0:00:00.145) 0:02:02.413 ********* RUNNING HANDLER [container-engine/docker : restart docker] ********************* Wednesday 22 May 2019 01:36:55 +0100 (0:00:00.104) 0:02:02.518 ********* changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Wednesday 22 May 2019 01:36:56 +0100 (0:00:00.417) 0:02:02.935 ********* changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload docker.socket] ****** Wednesday 22 May 2019 01:36:56 +0100 (0:00:00.525) 0:02:03.461 ********* RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Wednesday 22 May 2019 01:36:56 +0100 (0:00:00.122) 0:02:03.584 ********* changed: [kube2] changed: [kube1] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Wednesday 22 May 2019 01:36:59 +0100 (0:00:03.025) 0:02:06.610 ********* Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube1] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Wednesday 22 May 2019 01:37:09 +0100 (0:00:10.076) 0:02:16.686 ********* changed: [kube3] changed: [kube1] changed: [kube2] TASK [container-engine/docker : ensure docker service is started and enabled] *** Wednesday 22 May 2019 01:37:10 +0100 (0:00:00.591) 0:02:17.277 ********* ok: [kube1] => (item=docker) ok: [kube2] => (item=docker) ok: [kube3] => (item=docker) TASK [download : include_tasks] ************************************************ Wednesday 22 May 2019 01:37:10 +0100 (0:00:00.558) 0:02:17.836 ********* included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Wednesday 22 May 2019 01:37:11 +0100 (0:00:00.214) 0:02:18.051 ********* ok: [kube3] ok: [kube1] ok: 
[kube2] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Wednesday 22 May 2019 01:37:11 +0100 (0:00:00.620) 0:02:18.671 ********* changed: [kube3] changed: [kube1] changed: [kube2] TASK [download : container_download | create local directory for saved/loaded container images] *** Wednesday 22 May 2019 01:37:12 +0100 (0:00:00.515) 0:02:19.186 ********* TASK [download : Download items] *********************************************** Wednesday 22 May 2019 01:37:12 +0100 (0:00:00.063) 0:02:19.250 ********* fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube2, kube3, kube1 included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube2, kube3, kube1 fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube2, kube3, kube1 fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} PLAY RECAP ********************************************************************* kube1 : ok=109 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0 kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0 kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0 Wednesday 22 May 2019 01:37:13 +0100 (0:00:01.350) 0:02:20.601 ********* =============================================================================== Install packages ------------------------------------------------------- 24.77s Wait for host to be available ------------------------------------------ 16.29s Extend root VG --------------------------------------------------------- 13.11s container-engine/docker : Docker | pause while Docker restarts --------- 10.08s gather facts from all instances ----------------------------------------- 9.95s container-engine/docker : Docker | reload docker ------------------------ 3.03s Persist loaded modules -------------------------------------------------- 2.73s kubernetes/preinstall : Create kubernetes directories ------------------- 1.80s bootstrap-os : Gather nodes hostnames ----------------------------------- 1.66s Load required kernel modules -------------------------------------------- 1.56s bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 1.35s download : Download items ----------------------------------------------- 1.35s Extend the root LV and FS to occupy remaining space --------------------- 1.33s Gathering Facts --------------------------------------------------------- 1.29s kubernetes/preinstall : Create cni directories -------------------------- 1.20s download : Download items ----------------------------------------------- 1.19s bootstrap-os : Create remote_tmp for it is used by another module ------- 1.14s bootstrap-os : Disable fastestmirror plugin ----------------------------- 1.13s container-engine/docker : Write docker dns systemd drop-in -------------- 1.08s container-engine/docker : ensure service is started if docker packages are already present --- 1.04s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Wed May 22 01:14:34 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 22 May 2019 01:14:34 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #200 In-Reply-To: <2108185715.788.1558401791638.JavaMail.jenkins@jenkins.ci.centos.org> References: <2108185715.788.1558401791638.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <691245962.928.1558487674633.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 55.25 KB...] changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── create └── prepare --> Scenario: 'default' --> Action: 'create' [DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg. PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... [701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 Build step 'Execute shell' marked build as failure Performing Post build task... Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Thu May 23 00:15:54 2019 From: ci at centos.org (ci at centos.org) Date: Thu, 23 May 2019 00:15:54 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #372 In-Reply-To: <1059354321.923.1558484153205.JavaMail.jenkins@jenkins.ci.centos.org> References: <1059354321.923.1558484153205.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1304147425.1017.1558570554460.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.42 KB...] Total 55 MB/s | 143 MB 00:02 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7_6.2.x86_64 7/49 Installing : kernel-headers-3.10.0-957.12.2.el7.x86_64 8/49 Installing : glibc-headers-2.17-260.el7_6.5.x86_64 9/49 Installing : glibc-devel-2.17-260.el7_6.5.x86_64 10/49 Installing : gcc-4.8.5-36.el7_6.2.x86_64 11/49 Installing : elfutils-0.172-2.el7.x86_64 12/49 Installing : pakchois-0.4-10.el7.x86_64 13/49 Installing : unzip-6.0-19.el7.x86_64 14/49 Installing : dwz-0.11-3.el7.x86_64 15/49 Installing : bzip2-1.0.6-13.el7.x86_64 16/49 Installing : usermode-1.111-5.el7.x86_64 17/49 Installing : patch-2.7.1-10.el7_5.x86_64 18/49 Installing : python-backports-1.0-8.el7.x86_64 19/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 20/49 Installing : python-urllib3-1.10.2-5.el7.noarch 21/49 Installing : python-requests-2.6.0-1.el7_1.noarch 22/49 Installing : python-babel-0.9.6-8.el7.noarch 23/49 Installing : distribution-gpg-keys-1.30-1.el7.noarch 24/49 Installing : mock-core-configs-30.2-1.el7.noarch 25/49 Installing : libmodman-2.0.1-8.el7.x86_64 26/49 Installing : libproxy-0.4.11-11.el7.x86_64 27/49 Installing : python-markupsafe-0.11-10.el7.x86_64 28/49 Installing : python-jinja2-2.7.2-3.el7_6.noarch 29/49 Installing : python2-distro-1.2.0-3.el7.noarch 30/49 Installing : gdb-7.6.1-114.el7.x86_64 31/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/49 Installing : perl-srpm-macros-1-8.el7.noarch 33/49 Installing : pigz-2.3.4-1.el7.x86_64 34/49 Installing : python2-pyroute2-0.4.13-2.el7.noarch 35/49 Installing : golang-src-1.11.5-1.el7.noarch 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : 
redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-9.el7_6.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.15-1.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 2/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 3/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 4/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 5/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 6/49 Verifying : zip-3.0-11.el7.x86_64 7/49 Verifying : nettle-2.7.1-8.el7.x86_64 8/49 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 9/49 Verifying : golang-src-1.11.5-1.el7.noarch 10/49 Verifying : python2-pyroute2-0.4.13-2.el7.noarch 11/49 Verifying : pigz-2.3.4-1.el7.x86_64 12/49 Verifying : perl-srpm-macros-1-8.el7.noarch 13/49 Verifying : golang-1.11.5-1.el7.x86_64 14/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 15/49 Verifying : glibc-devel-2.17-260.el7_6.5.x86_64 16/49 Verifying : python-jinja2-2.7.2-3.el7_6.noarch 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : gdb-7.6.1-114.el7.x86_64 19/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 20/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 21/49 Verifying : gnutls-3.3.29-9.el7_6.x86_64 22/49 Verifying : python2-distro-1.2.0-3.el7.noarch 23/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 24/49 Verifying : libmodman-2.0.1-8.el7.x86_64 25/49 Verifying : mpfr-3.1.1-4.el7.x86_64 26/49 Verifying : distribution-gpg-keys-1.30-1.el7.noarch 27/49 Verifying : python-babel-0.9.6-8.el7.noarch 28/49 Verifying : mock-1.4.15-1.el7.noarch 29/49 Verifying : apr-util-1.5.2-6.el7.x86_64 30/49 Verifying : python-backports-1.0-8.el7.x86_64 31/49 Verifying : patch-2.7.1-10.el7_5.x86_64 32/49 Verifying : libmpc-1.0.1-3.el7.x86_64 33/49 Verifying : usermode-1.111-5.el7.x86_64 34/49 Verifying : python-six-1.9.0-2.el7.noarch 35/49 Verifying : libproxy-0.4.11-11.el7.x86_64 36/49 Verifying : neon-0.30.0-3.el7.x86_64 37/49 Verifying : mock-core-configs-30.2-1.el7.noarch 38/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 39/49 Verifying : bzip2-1.0.6-13.el7.x86_64 40/49 Verifying : subversion-1.7.14-14.el7.x86_64 41/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 42/49 Verifying : glibc-headers-2.17-260.el7_6.5.x86_64 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 46/49 Verifying : pakchois-0.4-10.el7.x86_64 47/49 Verifying : elfutils-0.172-2.el7.x86_64 48/49 Verifying : kernel-headers-3.10.0-957.12.2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.15-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.30-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5 glibc-headers.x86_64 0:2.17-260.el7_6.5 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 
0:3.10.0-957.12.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-3.el7_6 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-3.el7 python2-pyroute2.noarch 0:0.4.13-2.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1714 0 --:--:-- --:--:-- --:--:-- 1713 100 8513k 100 8513k 0 0 12.8M 0 --:--:-- --:--:-- --:--:-- 12.8M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 1883 0 --:--:-- --:--:-- --:--:-- 1888 0 38.3M 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 38.3M 100 38.3M 0 0 38.7M 0 --:--:-- --:--:-- --:--:-- 74.4M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 540 0 --:--:-- --:--:-- --:--:-- 540 0 0 0 620 0 0 1563 0 --:--:-- --:--:-- --:--:-- 1563 100 10.7M 100 10.7M 0 0 15.0M 0 --:--:-- --:--:-- --:--:-- 15.0M ~/nightlyrpmOb3yNo/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmOb3yNo/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmOb3yNo/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmOb3yNo ~ INFO: mock.py version 1.4.15 starting (python version = 2.7.5)... 
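The "Installing dep. Version: v0.5.0" block above is a pinned-version binary download whose curl progress meter was flattened by the archive; the same pattern repeats for gometalinter and etcd. A sketch of that pattern (the release URL and install path are assumptions, since the log truncates the actual command):

    # Sketch: pinned-version binary fetch, as the "Installing dep.
    # Version: v0.5.0" step suggests. URL and destination are assumptions.
    DEP_VERSION=v0.5.0
    curl -fsSL -o /usr/local/bin/dep \
        "https://github.com/golang/dep/releases/download/${DEP_VERSION}/dep-linux-amd64"
    chmod +x /usr/local/bin/dep
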
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmOb3yNo/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.15 INFO: Mock Version: 1.4.15 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmOb3yNo/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 24 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M f55388f045264340b2a1dd88ad7c4806 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.BrsQZQ:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
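Note that mock reports only that rpmbuild failed inside the systemd-nspawn chroot; the actual compile error lives in build.log under the results directory named in the log (/srv/glusterd2/nightly/master/7/x86_64). To reproduce the failing step outside Jenkins, one could rerun mock against the same SRPM and config, roughly as follows (a sketch; the SRPM path is the builder-local one from this log):

    # Sketch: re-run the failing chroot build with the same EPEL 7
    # x86_64 mock config named in the log, then inspect build.log.
    mock -r epel-7-x86_64 --rebuild \
        /root/nightlyrpmOb3yNo/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm
    less /var/lib/mock/epel-7-x86_64/result/build.log
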
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins7593151863223015679.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done 0744d74c +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 118 | n54.pufty | 172.19.3.118 | pufty | 3600 | Deployed | 0744d74c | None | None | 7 | x86_64 | 1 | 2530 | None | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Thu May 23 00:37:13 2019 From: ci at centos.org (ci at centos.org) Date: Thu, 23 May 2019 00:37:13 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #176 In-Reply-To: <520436857.926.1558485434124.JavaMail.jenkins@jenkins.ci.centos.org> References: <520436857.926.1558485434124.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <480599260.1020.1558571833422.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 287.08 KB...] TASK [container-engine/docker : check number of search domains] **************** Thursday 23 May 2019 01:36:47 +0100 (0:00:00.125) 0:01:44.382 ********** TASK [container-engine/docker : check length of search domains] **************** Thursday 23 May 2019 01:36:47 +0100 (0:00:00.131) 0:01:44.514 ********** TASK [container-engine/docker : check for minimum kernel version] ************** Thursday 23 May 2019 01:36:47 +0100 (0:00:00.139) 0:01:44.653 ********** TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Thursday 23 May 2019 01:36:47 +0100 (0:00:00.127) 0:01:44.781 ********** TASK [container-engine/docker : Ensure old versions of Docker are not installed. 
| RedHat] *** Thursday 23 May 2019 01:36:48 +0100 (0:00:00.243) 0:01:45.025 ********** TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Thursday 23 May 2019 01:36:48 +0100 (0:00:00.622) 0:01:45.647 ********** TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Thursday 23 May 2019 01:36:48 +0100 (0:00:00.109) 0:01:45.757 ********** TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Thursday 23 May 2019 01:36:48 +0100 (0:00:00.110) 0:01:45.867 ********** TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Thursday 23 May 2019 01:36:49 +0100 (0:00:00.133) 0:01:46.001 ********** TASK [container-engine/docker : Configure docker repository on Fedora] ********* Thursday 23 May 2019 01:36:49 +0100 (0:00:00.143) 0:01:46.144 ********** TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Thursday 23 May 2019 01:36:49 +0100 (0:00:00.117) 0:01:46.262 ********** TASK [container-engine/docker : Copy yum.conf for editing] ********************* Thursday 23 May 2019 01:36:49 +0100 (0:00:00.124) 0:01:46.387 ********** TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Thursday 23 May 2019 01:36:49 +0100 (0:00:00.119) 0:01:46.506 ********** TASK [container-engine/docker : ensure docker packages are installed] ********** Thursday 23 May 2019 01:36:49 +0100 (0:00:00.122) 0:01:46.628 ********** TASK [container-engine/docker : Ensure docker packages are installed] ********** Thursday 23 May 2019 01:36:49 +0100 (0:00:00.159) 0:01:46.788 ********** TASK [container-engine/docker : get available packages on Ubuntu] ************** Thursday 23 May 2019 01:36:50 +0100 (0:00:00.153) 0:01:46.941 ********** TASK [container-engine/docker : show available packages on ubuntu] ************* Thursday 23 May 2019 01:36:50 +0100 (0:00:00.121) 0:01:47.062 ********** TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Thursday 23 May 2019 01:36:50 +0100 (0:00:00.122) 0:01:47.185 ********** TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Thursday 23 May 2019 01:36:50 +0100 (0:00:00.124) 0:01:47.309 ********** ok: [kube1] ok: [kube2] ok: [kube3] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Thursday 23 May 2019 01:36:51 +0100 (0:00:00.879) 0:01:48.189 ********** ok: [kube1] ok: [kube2] ok: [kube3] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Thursday 23 May 2019 01:36:51 +0100 (0:00:00.613) 0:01:48.802 ********** TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Thursday 23 May 2019 01:36:52 +0100 (0:00:00.122) 0:01:48.924 ********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Thursday 23 May 2019 01:36:52 +0100 (0:00:00.549) 0:01:49.474 ********** TASK [container-engine/docker : get systemd version] *************************** Thursday 23 May 2019 01:36:52 +0100 (0:00:00.142) 0:01:49.617 ********** TASK [container-engine/docker : Write docker.service systemd file] ************* Thursday 23 May 2019 01:36:52 +0100 (0:00:00.138) 0:01:49.755 ********** TASK [container-engine/docker : Write docker options systemd drop-in] ********** Thursday 23 May 2019 01:36:52 +0100 (0:00:00.132) 0:01:49.888 ********** changed: [kube2] changed: [kube1] changed: [kube3] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Thursday 23 May 2019 01:36:53 +0100 (0:00:00.988) 0:01:50.877 ********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Thursday 23 May 2019 01:36:54 +0100 (0:00:00.939) 0:01:51.817 ********** TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Thursday 23 May 2019 01:36:55 +0100 (0:00:00.143) 0:01:51.960 ********** RUNNING HANDLER [container-engine/docker : restart docker] ********************* Thursday 23 May 2019 01:36:55 +0100 (0:00:00.117) 0:01:52.077 ********** changed: [kube2] changed: [kube1] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Thursday 23 May 2019 01:36:55 +0100 (0:00:00.424) 0:01:52.502 ********** changed: [kube2] changed: [kube1] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload docker.socket] ****** Thursday 23 May 2019 01:36:56 +0100 (0:00:00.517) 0:01:53.019 ********** RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Thursday 23 May 2019 01:36:56 +0100 (0:00:00.124) 0:01:53.144 ********** changed: [kube2] changed: [kube1] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Thursday 23 May 2019 01:36:59 +0100 (0:00:03.003) 0:01:56.148 ********** Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube2] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Thursday 23 May 2019 01:37:09 +0100 (0:00:10.092) 0:02:06.240 ********** changed: [kube2] changed: [kube1] changed: [kube3] TASK [container-engine/docker : ensure docker service is started and enabled] *** Thursday 23 May 2019 01:37:09 +0100 (0:00:00.632) 0:02:06.873 ********** ok: [kube1] => (item=docker) ok: [kube2] => (item=docker) ok: [kube3] => (item=docker) TASK [download : include_tasks] ************************************************ Thursday 23 May 2019 01:37:10 +0100 (0:00:00.561) 0:02:07.435 ********** included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Thursday 23 May 2019 01:37:10 +0100 (0:00:00.214) 0:02:07.649 ********** ok: [kube2] ok: [kube1] ok: 
[kube3] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Thursday 23 May 2019 01:37:11 +0100 (0:00:00.561) 0:02:08.210 ********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [download : container_download | create local directory for saved/loaded container images] *** Thursday 23 May 2019 01:37:11 +0100 (0:00:00.455) 0:02:08.665 ********** TASK [download : Download items] *********************************************** Thursday 23 May 2019 01:37:11 +0100 (0:00:00.075) 0:02:08.741 ********** fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube2, kube1, kube3 included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube2, kube1, kube3 fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube2, kube1, kube3 fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} PLAY RECAP ********************************************************************* kube1 : ok=108 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0 kube2 : ok=97 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0 kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0 Thursday 23 May 2019 01:37:13 +0100 (0:00:01.332) 0:02:10.074 ********** =============================================================================== Install packages ------------------------------------------------------- 23.99s Wait for host to be available ------------------------------------------ 16.20s container-engine/docker : Docker | pause while Docker restarts --------- 10.09s gather facts from all instances ----------------------------------------- 9.95s Persist loaded modules -------------------------------------------------- 3.19s container-engine/docker : Docker | reload docker ------------------------ 3.00s kubernetes/preinstall : Create kubernetes directories ------------------- 1.96s Load required kernel modules -------------------------------------------- 1.69s kubernetes/preinstall : Enable ip forwarding ---------------------------- 1.61s bootstrap-os : Gather nodes hostnames ----------------------------------- 1.53s Extend root VG ---------------------------------------------------------- 1.53s Extend the root LV and FS to occupy remaining space --------------------- 1.43s bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 1.38s download : Download items ----------------------------------------------- 1.33s download : Download items ----------------------------------------------- 1.21s kubernetes/preinstall : Create cni directories -------------------------- 1.19s Gathering Facts --------------------------------------------------------- 1.16s download : Sync container ----------------------------------------------- 1.16s bootstrap-os : Create remote_tmp for it is used by another module ------- 1.14s kubernetes/preinstall : Hosts | Update (if necessary) hosts file -------- 1.09s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
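All of the fatal errors in this play are one parse failure repeated per host and per include: kubespray's download_container.yml is pulled in with include_tasks, and a delegate_to keyword sits on the include itself, which Ansible 2.8 and later reject on a TaskInclude. The usual repairs are pinning an Ansible release that the deployed kubespray version supports, or moving the delegation off the include, for example via the include's apply: block. A hedged sketch of the latter (the variable name download_delegate and the when condition are illustrative, not kubespray's exact file):

    # Sketch: apply delegation to the tasks *inside* a dynamic include
    # instead of putting delegate_to on the include itself.
    - name: container_download | include download tasks
      include_tasks: download_container.yml
      apply:
        delegate_to: "{{ download_delegate }}"   # assumption: delegate host var
      when: download_run_once                    # assumption: gating condition
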
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Thu May 23 01:22:43 2019 From: ci at centos.org (ci at centos.org) Date: Thu, 23 May 2019 01:22:43 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #201 In-Reply-To: <691245962.928.1558487674633.JavaMail.jenkins@jenkins.ci.centos.org> References: <691245962.928.1558487674633.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1756778965.1025.1558574563423.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 55.21 KB...] changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── create └── prepare --> Scenario: 'default' --> Action: 'create' [DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg. PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. 
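The deprecation warning at the start of this run is also actionable: docker_image_facts was renamed docker_image_info in Ansible 2.8, and the old alias was removed in a later release, so molecule's create playbook will eventually break rather than merely warn. A sketch of the modernized task (the image name pattern and registered variable follow molecule's conventions; treat them as illustrative):

    # Sketch: the renamed module, docker_image_facts -> docker_image_info.
    - name: Discover local Docker images
      docker_image_info:
        name: "molecule_local/{{ item.image }}"   # assumption: molecule's naming
      with_items: "{{ molecule_yml.platforms }}"
      register: docker_images
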
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... [701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 Build step 'Execute shell' marked build as failure Performing Post build task... Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Fri May 24 00:15:59 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 24 May 2019 00:15:59 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #373 In-Reply-To: <1304147425.1017.1558570554460.JavaMail.jenkins@jenkins.ci.centos.org> References: <1304147425.1017.1558570554460.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <622420687.1132.1558656959395.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.41 KB...] Total 69 MB/s | 143 MB 00:02 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7_6.2.x86_64 7/49 Installing : kernel-headers-3.10.0-957.12.2.el7.x86_64 8/49 Installing : glibc-headers-2.17-260.el7_6.5.x86_64 9/49 Installing : glibc-devel-2.17-260.el7_6.5.x86_64 10/49 Installing : gcc-4.8.5-36.el7_6.2.x86_64 11/49 Installing : elfutils-0.172-2.el7.x86_64 12/49 Installing : pakchois-0.4-10.el7.x86_64 13/49 Installing : unzip-6.0-19.el7.x86_64 14/49 Installing : dwz-0.11-3.el7.x86_64 15/49 Installing : bzip2-1.0.6-13.el7.x86_64 16/49 Installing : usermode-1.111-5.el7.x86_64 17/49 Installing : patch-2.7.1-10.el7_5.x86_64 18/49 Installing : python-backports-1.0-8.el7.x86_64 19/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 20/49 Installing : python-urllib3-1.10.2-5.el7.noarch 21/49 Installing : python-requests-2.6.0-1.el7_1.noarch 22/49 Installing : python-babel-0.9.6-8.el7.noarch 23/49 Installing : distribution-gpg-keys-1.30-1.el7.noarch 24/49 Installing : mock-core-configs-30.2-1.el7.noarch 25/49 Installing : libmodman-2.0.1-8.el7.x86_64 26/49 Installing : libproxy-0.4.11-11.el7.x86_64 27/49 Installing : python-markupsafe-0.11-10.el7.x86_64 28/49 Installing : python-jinja2-2.7.2-3.el7_6.noarch 29/49 Installing : python2-distro-1.2.0-3.el7.noarch 30/49 Installing : gdb-7.6.1-114.el7.x86_64 31/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/49 Installing : perl-srpm-macros-1-8.el7.noarch 33/49 Installing : pigz-2.3.4-1.el7.x86_64 34/49 Installing : python2-pyroute2-0.4.13-2.el7.noarch 35/49 Installing : golang-src-1.11.5-1.el7.noarch 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : 
redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-9.el7_6.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.15-1.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 2/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 3/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 4/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 5/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 6/49 Verifying : zip-3.0-11.el7.x86_64 7/49 Verifying : nettle-2.7.1-8.el7.x86_64 8/49 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 9/49 Verifying : golang-src-1.11.5-1.el7.noarch 10/49 Verifying : python2-pyroute2-0.4.13-2.el7.noarch 11/49 Verifying : pigz-2.3.4-1.el7.x86_64 12/49 Verifying : perl-srpm-macros-1-8.el7.noarch 13/49 Verifying : golang-1.11.5-1.el7.x86_64 14/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 15/49 Verifying : glibc-devel-2.17-260.el7_6.5.x86_64 16/49 Verifying : python-jinja2-2.7.2-3.el7_6.noarch 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : gdb-7.6.1-114.el7.x86_64 19/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 20/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 21/49 Verifying : gnutls-3.3.29-9.el7_6.x86_64 22/49 Verifying : python2-distro-1.2.0-3.el7.noarch 23/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 24/49 Verifying : libmodman-2.0.1-8.el7.x86_64 25/49 Verifying : mpfr-3.1.1-4.el7.x86_64 26/49 Verifying : distribution-gpg-keys-1.30-1.el7.noarch 27/49 Verifying : python-babel-0.9.6-8.el7.noarch 28/49 Verifying : mock-1.4.15-1.el7.noarch 29/49 Verifying : apr-util-1.5.2-6.el7.x86_64 30/49 Verifying : python-backports-1.0-8.el7.x86_64 31/49 Verifying : patch-2.7.1-10.el7_5.x86_64 32/49 Verifying : libmpc-1.0.1-3.el7.x86_64 33/49 Verifying : usermode-1.111-5.el7.x86_64 34/49 Verifying : python-six-1.9.0-2.el7.noarch 35/49 Verifying : libproxy-0.4.11-11.el7.x86_64 36/49 Verifying : neon-0.30.0-3.el7.x86_64 37/49 Verifying : mock-core-configs-30.2-1.el7.noarch 38/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 39/49 Verifying : bzip2-1.0.6-13.el7.x86_64 40/49 Verifying : subversion-1.7.14-14.el7.x86_64 41/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 42/49 Verifying : glibc-headers-2.17-260.el7_6.5.x86_64 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 46/49 Verifying : pakchois-0.4-10.el7.x86_64 47/49 Verifying : elfutils-0.172-2.el7.x86_64 48/49 Verifying : kernel-headers-3.10.0-957.12.2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.15-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.30-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5 glibc-headers.x86_64 0:2.17-260.el7_6.5 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 
0:3.10.0-957.12.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-3.el7_6 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-3.el7 python2-pyroute2.noarch 0:0.4.13-2.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1783 0 --:--:-- --:--:-- --:--:-- 1784 100 8513k 100 8513k 0 0 14.3M 0 --:--:-- --:--:-- --:--:-- 14.3M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2102 0 --:--:-- --:--:-- --:--:-- 2111 8 38.3M 8 3276k 0 0 6168k 0 0:00:06 --:--:-- 0:00:06 6168k100 38.3M 100 38.3M 0 0 36.6M 0 0:00:01 0:00:01 --:--:-- 68.0M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 518 0 --:--:-- --:--:-- --:--:-- 518 0 0 0 620 0 0 1571 0 --:--:-- --:--:-- --:--:-- 1571 100 10.7M 100 10.7M 0 0 14.6M 0 --:--:-- --:--:-- --:--:-- 14.6M ~/nightlyrpmg41103/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmg41103/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmg41103/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmg41103 ~ INFO: mock.py version 1.4.15 starting (python version = 2.7.5)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmg41103/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.15 INFO: Mock Version: 1.4.15 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmg41103/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 26 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M b2508ffc5d0b4695bb13009a87314853 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.Iz1IBT:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True
Logical operation result is TRUE
Running script : 
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by

SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins2585378817832233914.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done 1f32070b
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname   | ip_address  | chassis | used_count | current_state | comment  | distro | rel  | centos_version | architecture | node_pool | console_port | flavor |
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 175     | n48.crusty | 172.19.2.48 | crusty  | 3604       | Deployed      | 1f32070b | None   | None | 7              | x86_64       | 1         | 2470         | None   |
+---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0

From ci at centos.org  Fri May 24 00:41:20 2019
From: ci at centos.org (ci at centos.org)
Date: Fri, 24 May 2019 00:41:20 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #177
In-Reply-To: <480599260.1020.1558571833422.JavaMail.jenkins@jenkins.ci.centos.org>
References: <480599260.1020.1558571833422.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <632977634.1149.1558658480763.JavaMail.jenkins@jenkins.ci.centos.org>

See 
------------------------------------------
[...truncated 287.20 KB...]
TASK [container-engine/docker : check number of search domains] **************** Friday 24 May 2019 01:40:38 +0100 (0:00:00.302) 0:03:01.429 ************ TASK [container-engine/docker : check length of search domains] **************** Friday 24 May 2019 01:40:38 +0100 (0:00:00.289) 0:03:01.719 ************ TASK [container-engine/docker : check for minimum kernel version] ************** Friday 24 May 2019 01:40:38 +0100 (0:00:00.287) 0:03:02.007 ************ TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Friday 24 May 2019 01:40:39 +0100 (0:00:00.296) 0:03:02.303 ************ TASK [container-engine/docker : Ensure old versions of Docker are not installed. 
| RedHat] *** Friday 24 May 2019 01:40:39 +0100 (0:00:00.597) 0:03:02.901 ************ TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Friday 24 May 2019 01:40:41 +0100 (0:00:01.383) 0:03:04.284 ************ TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Friday 24 May 2019 01:40:41 +0100 (0:00:00.270) 0:03:04.555 ************ TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Friday 24 May 2019 01:40:41 +0100 (0:00:00.252) 0:03:04.808 ************ TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Friday 24 May 2019 01:40:41 +0100 (0:00:00.304) 0:03:05.113 ************ TASK [container-engine/docker : Configure docker repository on Fedora] ********* Friday 24 May 2019 01:40:42 +0100 (0:00:00.303) 0:03:05.416 ************ TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Friday 24 May 2019 01:40:42 +0100 (0:00:00.286) 0:03:05.703 ************ TASK [container-engine/docker : Copy yum.conf for editing] ********************* Friday 24 May 2019 01:40:42 +0100 (0:00:00.278) 0:03:05.981 ************ TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Friday 24 May 2019 01:40:43 +0100 (0:00:00.286) 0:03:06.268 ************ TASK [container-engine/docker : ensure docker packages are installed] ********** Friday 24 May 2019 01:40:43 +0100 (0:00:00.282) 0:03:06.550 ************ TASK [container-engine/docker : Ensure docker packages are installed] ********** Friday 24 May 2019 01:40:43 +0100 (0:00:00.361) 0:03:06.911 ************ TASK [container-engine/docker : get available packages on Ubuntu] ************** Friday 24 May 2019 01:40:44 +0100 (0:00:00.350) 0:03:07.262 ************ TASK [container-engine/docker : show available packages on ubuntu] ************* Friday 24 May 2019 01:40:44 +0100 (0:00:00.286) 0:03:07.548 ************ TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Friday 24 May 2019 01:40:44 +0100 (0:00:00.292) 0:03:07.840 ************ TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Friday 24 May 2019 01:40:45 +0100 (0:00:00.294) 0:03:08.134 ************ ok: [kube2] ok: [kube1] ok: [kube3] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Friday 24 May 2019 01:40:46 +0100 (0:00:01.961) 0:03:10.096 ************ ok: [kube1] ok: [kube2] ok: [kube3] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Friday 24 May 2019 01:40:48 +0100 (0:00:01.185) 0:03:11.281 ************ TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Friday 24 May 2019 01:40:48 +0100 (0:00:00.329) 0:03:11.611 ************ changed: [kube1] changed: [kube3] changed: [kube2] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Friday 24 May 2019 01:40:49 +0100 (0:00:01.026) 0:03:12.638 ************ TASK [container-engine/docker : get systemd version] *************************** Friday 24 May 2019 01:40:49 +0100 (0:00:00.397) 0:03:13.036 ************ TASK [container-engine/docker : Write docker.service systemd file] ************* Friday 24 May 2019 01:40:50 +0100 (0:00:00.300) 0:03:13.336 ************ TASK [container-engine/docker : Write docker options systemd drop-in] ********** Friday 24 May 2019 01:40:50 +0100 (0:00:00.305) 0:03:13.642 ************ changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Friday 24 May 2019 01:40:52 +0100 (0:00:02.133) 0:03:15.775 ************ changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Friday 24 May 2019 01:40:54 +0100 (0:00:02.116) 0:03:17.892 ************ TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Friday 24 May 2019 01:40:55 +0100 (0:00:00.333) 0:03:18.226 ************ RUNNING HANDLER [container-engine/docker : restart docker] ********************* Friday 24 May 2019 01:40:55 +0100 (0:00:00.232) 0:03:18.458 ************ changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Friday 24 May 2019 01:40:56 +0100 (0:00:00.982) 0:03:19.441 ************ changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload docker.socket] ****** Friday 24 May 2019 01:40:57 +0100 (0:00:01.128) 0:03:20.570 ************ RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Friday 24 May 2019 01:40:57 +0100 (0:00:00.284) 0:03:20.854 ************ changed: [kube1] changed: [kube3] changed: [kube2] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Friday 24 May 2019 01:41:01 +0100 (0:00:04.160) 0:03:25.015 ************ Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube1] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Friday 24 May 2019 01:41:12 +0100 (0:00:10.230) 0:03:35.246 ************ changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : ensure docker service is started and enabled] *** Friday 24 May 2019 01:41:13 +0100 (0:00:01.276) 0:03:36.522 ************ ok: [kube2] => (item=docker) ok: [kube1] => (item=docker) ok: [kube3] => (item=docker) TASK [download : include_tasks] ************************************************ Friday 24 May 2019 01:41:14 +0100 (0:00:01.441) 0:03:37.964 ************ included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Friday 24 May 2019 01:41:15 +0100 (0:00:00.490) 0:03:38.454 ************ ok: [kube1] ok: [kube2] ok: 
[kube3] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Friday 24 May 2019 01:41:16 +0100 (0:00:01.167) 0:03:39.622 ************ changed: [kube1] changed: [kube3] changed: [kube2] TASK [download : container_download | create local directory for saved/loaded container images] *** Friday 24 May 2019 01:41:17 +0100 (0:00:00.943) 0:03:40.566 ************ TASK [download : Download items] *********************************************** Friday 24 May 2019 01:41:17 +0100 (0:00:00.118) 0:03:40.685 ************ fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube2, kube1, kube3 included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube2, kube1, kube3 fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube2, kube1, kube3 fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} PLAY RECAP ********************************************************************* kube1 : ok=109 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0 kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0 kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0 Friday 24 May 2019 01:41:20 +0100 (0:00:02.762) 0:03:43.448 ************ =============================================================================== Install packages ------------------------------------------------------- 32.10s Wait for host to be available ------------------------------------------ 24.21s gather facts from all instances ---------------------------------------- 17.31s container-engine/docker : Docker | pause while Docker restarts --------- 10.23s Persist loaded modules -------------------------------------------------- 5.65s container-engine/docker : Docker | reload docker ------------------------ 4.16s kubernetes/preinstall : Create kubernetes directories ------------------- 3.98s download : Download items ----------------------------------------------- 2.76s bootstrap-os : Gather nodes hostnames ----------------------------------- 2.76s kubernetes/preinstall : Create cni directories -------------------------- 2.69s Load required kernel modules -------------------------------------------- 2.65s bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.55s Extend root VG ---------------------------------------------------------- 2.41s kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.24s kubernetes/preinstall : Enable ip forwarding ---------------------------- 2.24s download : Sync container ----------------------------------------------- 2.17s kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.16s container-engine/docker : Write docker options systemd drop-in ---------- 2.13s container-engine/docker : Write docker dns systemd drop-in -------------- 2.12s bootstrap-os : Disable fastestmirror plugin ----------------------------- 2.02s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False
Logical operation result is FALSE
Skipping script : 
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by

SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

END OF POST BUILD TASK : 0

From ci at centos.org  Fri May 24 01:22:52 2019
From: ci at centos.org (ci at centos.org)
Date: Fri, 24 May 2019 01:22:52 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #202
In-Reply-To: <1756778965.1025.1558574563423.JavaMail.jenkins@jenkins.ci.centos.org>
References: <1756778965.1025.1558574563423.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <943341987.1174.1558660972049.JavaMail.jenkins@jenkins.ci.centos.org>

See 
------------------------------------------
[...truncated 55.25 KB...]
changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── create └── prepare --> Scenario: 'default' --> Action: 'create' [DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg. PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. 
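[For context: the "--> Test matrix" trees above are printed by Molecule from the scenario configuration, and the action list is the scenario's test sequence. A minimal molecule.yml sketch that would yield the full default matrix shown; the driver, image, and linter choices here are assumptions for illustration, not taken from the repository:

  driver:
    name: docker            # assumed; the log shows Docker-based instances
  platforms:
    - name: instance
      image: centos:7       # placeholder image
  provisioner:
    name: ansible
    lint:
      name: ansible-lint    # the linter whose findings fail the run below
  scenario:
    name: default
    test_sequence:
      - lint
      - cleanup
      - destroy
      - dependency
      - syntax
      - create
      - prepare
      - converge
      - idempotence
      - side_effect
      - verify
      - cleanup
      - destroy
]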
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... [701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
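[The [701]/[703] ansible-lint findings above, which appear for both molecule runs in this build, all point at one cause: roles/firewall_config/meta/main.yml still carries the unedited ansible-galaxy template, so the default author/description/company/license strings and the missing platforms list trip the metadata rules, and Molecule treats the lint action as failed. A sketch of a galaxy_info block that would satisfy both rules; the concrete values are placeholders, not the project's actual metadata:

  galaxy_info:
    author: Gluster maintainers                    # placeholder; any non-template value clears 703
    description: Firewall configuration for GlusterFS hosts   # placeholder
    license: GPLv3                                 # placeholder; use the project's real license
    min_ansible_version: 2.5
    platforms:                                     # rule 701: role info should contain platforms
      - name: EL
        versions:
          - 7
  dependencies: []

The template's "your company (optional)" line can simply be deleted, since an absent optional field is not flagged.]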
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 Build step 'Execute shell' marked build as failure Performing Post build task...
Could not match :Build started : False
Logical operation result is FALSE
Skipping script : 
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by

SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

END OF POST BUILD TASK : 0

From ci at centos.org  Sat May 25 00:15:55 2019
From: ci at centos.org (ci at centos.org)
Date: Sat, 25 May 2019 00:15:55 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #374
In-Reply-To: <622420687.1132.1558656959395.JavaMail.jenkins@jenkins.ci.centos.org>
References: <622420687.1132.1558656959395.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1522287951.1360.1558743355051.JavaMail.jenkins@jenkins.ci.centos.org>

See 
------------------------------------------
[...truncated 37.43 KB...]
Total 61 MB/s | 143 MB 00:02 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7_6.2.x86_64 7/49 Installing : kernel-headers-3.10.0-957.12.2.el7.x86_64 8/49 Installing : glibc-headers-2.17-260.el7_6.5.x86_64 9/49 Installing : glibc-devel-2.17-260.el7_6.5.x86_64 10/49 Installing : gcc-4.8.5-36.el7_6.2.x86_64 11/49 Installing : elfutils-0.172-2.el7.x86_64 12/49 Installing : pakchois-0.4-10.el7.x86_64 13/49 Installing : unzip-6.0-19.el7.x86_64 14/49 Installing : dwz-0.11-3.el7.x86_64 15/49 Installing : bzip2-1.0.6-13.el7.x86_64 16/49 Installing : usermode-1.111-5.el7.x86_64 17/49 Installing : patch-2.7.1-10.el7_5.x86_64 18/49 Installing : python-backports-1.0-8.el7.x86_64 19/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 20/49 Installing : python-urllib3-1.10.2-5.el7.noarch 21/49 Installing : python-requests-2.6.0-1.el7_1.noarch 22/49 Installing : python-babel-0.9.6-8.el7.noarch 23/49 Installing : distribution-gpg-keys-1.30-1.el7.noarch 24/49 Installing : mock-core-configs-30.2-1.el7.noarch 25/49 Installing : libmodman-2.0.1-8.el7.x86_64 26/49 Installing : libproxy-0.4.11-11.el7.x86_64 27/49 Installing : python-markupsafe-0.11-10.el7.x86_64 28/49 Installing : python-jinja2-2.7.2-3.el7_6.noarch 29/49 Installing : python2-distro-1.2.0-3.el7.noarch 30/49 Installing : gdb-7.6.1-114.el7.x86_64 31/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/49 Installing : perl-srpm-macros-1-8.el7.noarch 33/49 Installing : pigz-2.3.4-1.el7.x86_64 34/49 Installing : python2-pyroute2-0.4.13-2.el7.noarch 35/49 Installing : golang-src-1.11.5-1.el7.noarch 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : 
redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-9.el7_6.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.15-1.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 2/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 3/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 4/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 5/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 6/49 Verifying : zip-3.0-11.el7.x86_64 7/49 Verifying : nettle-2.7.1-8.el7.x86_64 8/49 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 9/49 Verifying : golang-src-1.11.5-1.el7.noarch 10/49 Verifying : python2-pyroute2-0.4.13-2.el7.noarch 11/49 Verifying : pigz-2.3.4-1.el7.x86_64 12/49 Verifying : perl-srpm-macros-1-8.el7.noarch 13/49 Verifying : golang-1.11.5-1.el7.x86_64 14/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 15/49 Verifying : glibc-devel-2.17-260.el7_6.5.x86_64 16/49 Verifying : python-jinja2-2.7.2-3.el7_6.noarch 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : gdb-7.6.1-114.el7.x86_64 19/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 20/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 21/49 Verifying : gnutls-3.3.29-9.el7_6.x86_64 22/49 Verifying : python2-distro-1.2.0-3.el7.noarch 23/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 24/49 Verifying : libmodman-2.0.1-8.el7.x86_64 25/49 Verifying : mpfr-3.1.1-4.el7.x86_64 26/49 Verifying : distribution-gpg-keys-1.30-1.el7.noarch 27/49 Verifying : python-babel-0.9.6-8.el7.noarch 28/49 Verifying : mock-1.4.15-1.el7.noarch 29/49 Verifying : apr-util-1.5.2-6.el7.x86_64 30/49 Verifying : python-backports-1.0-8.el7.x86_64 31/49 Verifying : patch-2.7.1-10.el7_5.x86_64 32/49 Verifying : libmpc-1.0.1-3.el7.x86_64 33/49 Verifying : usermode-1.111-5.el7.x86_64 34/49 Verifying : python-six-1.9.0-2.el7.noarch 35/49 Verifying : libproxy-0.4.11-11.el7.x86_64 36/49 Verifying : neon-0.30.0-3.el7.x86_64 37/49 Verifying : mock-core-configs-30.2-1.el7.noarch 38/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 39/49 Verifying : bzip2-1.0.6-13.el7.x86_64 40/49 Verifying : subversion-1.7.14-14.el7.x86_64 41/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 42/49 Verifying : glibc-headers-2.17-260.el7_6.5.x86_64 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 46/49 Verifying : pakchois-0.4-10.el7.x86_64 47/49 Verifying : elfutils-0.172-2.el7.x86_64 48/49 Verifying : kernel-headers-3.10.0-957.12.2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.15-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.30-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5 glibc-headers.x86_64 0:2.17-260.el7_6.5 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 
0:3.10.0-957.12.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-3.el7_6 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-3.el7 python2-pyroute2.noarch 0:0.4.13-2.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1596 0 --:--:-- --:--:-- --:--:-- 1604 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 8513k 100 8513k 0 0 11.7M 0 --:--:-- --:--:-- --:--:-- 31.3M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 1970 0 --:--:-- --:--:-- --:--:-- 1977 61 38.3M 61 23.4M 0 0 33.7M 0 0:00:01 --:--:-- 0:00:01 33.7M100 38.3M 100 38.3M 0 0 45.1M 0 --:--:-- --:--:-- --:--:-- 97.0M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 570 0 --:--:-- --:--:-- --:--:-- 573 0 0 0 620 0 0 1540 0 --:--:-- --:--:-- --:--:-- 1540 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 10.7M 100 10.7M 0 0 15.5M 0 --:--:-- --:--:-- --:--:-- 38.1M ~/nightlyrpmdmvqGl/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmdmvqGl/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmdmvqGl/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmdmvqGl ~ INFO: mock.py version 1.4.15 starting (python version = 2.7.5)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmdmvqGl/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.15 INFO: Mock Version: 1.4.15 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmdmvqGl/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 24 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M 9189a547c946479491d5652db33bc064 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.Q_cIYM:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True
Logical operation result is TRUE
Running script : 
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by

SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}

for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done

[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins3031628563158878331.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done 8659981b
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname  | ip_address   | chassis | used_count | current_state | comment  | distro | rel  | centos_version | architecture | node_pool | console_port | flavor |
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 110     | n46.pufty | 172.19.3.110 | pufty   | 3609       | Deployed      | 8659981b | None   | None | 7              | x86_64       | 1         | 2450         | None   |
+---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0

From ci at centos.org  Sat May 25 00:41:43 2019
From: ci at centos.org (ci at centos.org)
Date: Sat, 25 May 2019 00:41:43 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #178
In-Reply-To: <632977634.1149.1558658480763.JavaMail.jenkins@jenkins.ci.centos.org>
References: <632977634.1149.1558658480763.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <2125451787.1362.1558744903524.JavaMail.jenkins@jenkins.ci.centos.org>

See 
------------------------------------------
[...truncated 287.09 KB...]
TASK [container-engine/docker : check number of search domains] **************** Saturday 25 May 2019 01:41:01 +0100 (0:00:00.290) 0:03:02.699 ********** TASK [container-engine/docker : check length of search domains] **************** Saturday 25 May 2019 01:41:01 +0100 (0:00:00.292) 0:03:02.992 ********** TASK [container-engine/docker : check for minimum kernel version] ************** Saturday 25 May 2019 01:41:01 +0100 (0:00:00.303) 0:03:03.296 ********** TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Saturday 25 May 2019 01:41:02 +0100 (0:00:00.284) 0:03:03.580 ********** TASK [container-engine/docker : Ensure old versions of Docker are not installed. 
| RedHat] *** Saturday 25 May 2019 01:41:02 +0100 (0:00:00.583) 0:03:04.164 ********** TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Saturday 25 May 2019 01:41:04 +0100 (0:00:01.335) 0:03:05.500 ********** TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Saturday 25 May 2019 01:41:04 +0100 (0:00:00.255) 0:03:05.756 ********** TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Saturday 25 May 2019 01:41:04 +0100 (0:00:00.251) 0:03:06.007 ********** TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Saturday 25 May 2019 01:41:04 +0100 (0:00:00.298) 0:03:06.306 ********** TASK [container-engine/docker : Configure docker repository on Fedora] ********* Saturday 25 May 2019 01:41:05 +0100 (0:00:00.297) 0:03:06.603 ********** TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Saturday 25 May 2019 01:41:05 +0100 (0:00:00.281) 0:03:06.885 ********** TASK [container-engine/docker : Copy yum.conf for editing] ********************* Saturday 25 May 2019 01:41:05 +0100 (0:00:00.285) 0:03:07.171 ********** TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Saturday 25 May 2019 01:41:06 +0100 (0:00:00.275) 0:03:07.446 ********** TASK [container-engine/docker : ensure docker packages are installed] ********** Saturday 25 May 2019 01:41:06 +0100 (0:00:00.283) 0:03:07.730 ********** TASK [container-engine/docker : Ensure docker packages are installed] ********** Saturday 25 May 2019 01:41:06 +0100 (0:00:00.371) 0:03:08.102 ********** TASK [container-engine/docker : get available packages on Ubuntu] ************** Saturday 25 May 2019 01:41:07 +0100 (0:00:00.348) 0:03:08.450 ********** TASK [container-engine/docker : show available packages on ubuntu] ************* Saturday 25 May 2019 01:41:07 +0100 (0:00:00.281) 0:03:08.731 ********** TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Saturday 25 May 2019 01:41:07 +0100 (0:00:00.290) 0:03:09.022 ********** TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Saturday 25 May 2019 01:41:07 +0100 (0:00:00.282) 0:03:09.304 ********** ok: [kube2] ok: [kube1] ok: [kube3] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Saturday 25 May 2019 01:41:09 +0100 (0:00:01.958) 0:03:11.263 ********** ok: [kube1] ok: [kube2] ok: [kube3] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Saturday 25 May 2019 01:41:10 +0100 (0:00:01.084) 0:03:12.347 ********** TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Saturday 25 May 2019 01:41:11 +0100 (0:00:00.276) 0:03:12.624 ********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Saturday 25 May 2019 01:41:12 +0100 (0:00:00.949) 0:03:13.573 ********** TASK [container-engine/docker : get systemd version] *************************** Saturday 25 May 2019 01:41:12 +0100 (0:00:00.306) 0:03:13.879 ********** TASK [container-engine/docker : Write docker.service systemd file] ************* Saturday 25 May 2019 01:41:12 +0100 (0:00:00.299) 0:03:14.179 ********** TASK [container-engine/docker : Write docker options systemd drop-in] ********** Saturday 25 May 2019 01:41:13 +0100 (0:00:00.306) 0:03:14.485 ********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Saturday 25 May 2019 01:41:15 +0100 (0:00:01.976) 0:03:16.462 ********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Saturday 25 May 2019 01:41:17 +0100 (0:00:02.111) 0:03:18.574 ********** TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Saturday 25 May 2019 01:41:17 +0100 (0:00:00.401) 0:03:18.975 ********** RUNNING HANDLER [container-engine/docker : restart docker] ********************* Saturday 25 May 2019 01:41:17 +0100 (0:00:00.267) 0:03:19.242 ********** changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Saturday 25 May 2019 01:41:18 +0100 (0:00:01.039) 0:03:20.281 ********** changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload docker.socket] ****** Saturday 25 May 2019 01:41:20 +0100 (0:00:01.293) 0:03:21.575 ********** RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Saturday 25 May 2019 01:41:20 +0100 (0:00:00.333) 0:03:21.908 ********** changed: [kube1] changed: [kube3] changed: [kube2] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Saturday 25 May 2019 01:41:24 +0100 (0:00:04.220) 0:03:26.129 ********** Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube1] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Saturday 25 May 2019 01:41:34 +0100 (0:00:10.198) 0:03:36.327 ********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : ensure docker service is started and enabled] *** Saturday 25 May 2019 01:41:36 +0100 (0:00:01.317) 0:03:37.645 ********** ok: [kube1] => (item=docker) ok: [kube2] => (item=docker) ok: [kube3] => (item=docker) TASK [download : include_tasks] ************************************************ Saturday 25 May 2019 01:41:37 +0100 (0:00:01.283) 0:03:38.928 ********** included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Saturday 25 May 2019 01:41:38 +0100 (0:00:00.509) 0:03:39.438 ********** ok: [kube1] ok: [kube2] ok: 
[kube3] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Saturday 25 May 2019 01:41:39 +0100 (0:00:01.221) 0:03:40.659 ********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [download : container_download | create local directory for saved/loaded container images] *** Saturday 25 May 2019 01:41:40 +0100 (0:00:01.042) 0:03:41.702 ********** TASK [download : Download items] *********************************************** Saturday 25 May 2019 01:41:40 +0100 (0:00:00.127) 0:03:41.829 ********** fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} [...the identical 'delegate_to' failure repeats verbatim for kube2 and kube3, and again for all three hosts on each remaining download item; three successful includes of /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 are interleaved...] fatal: [kube2]: FAILED!
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} PLAY RECAP ********************************************************************* kube1 : ok=109 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0 kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0 kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0 Saturday 25 May 2019 01:41:43 +0100 (0:00:02.674) 0:03:44.503 ********** =============================================================================== Install packages ------------------------------------------------------- 34.93s Wait for host to be available ------------------------------------------ 23.92s gather facts from all instances ---------------------------------------- 17.81s container-engine/docker : Docker | pause while Docker restarts --------- 10.20s Persist loaded modules -------------------------------------------------- 5.87s container-engine/docker : Docker | reload docker ------------------------ 4.22s kubernetes/preinstall : Create kubernetes directories ------------------- 4.09s download : Download items ----------------------------------------------- 2.67s bootstrap-os : Gather nodes hostnames ----------------------------------- 2.63s Load required kernel modules -------------------------------------------- 2.59s Extend root VG ---------------------------------------------------------- 2.51s kubernetes/preinstall : Create cni directories -------------------------- 2.48s bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.42s kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.16s container-engine/docker : Write docker dns systemd drop-in -------------- 2.11s kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.08s bootstrap-os : Disable fastestmirror plugin ----------------------------- 2.07s download : Sync container ----------------------------------------------- 2.06s Extend the root LV and FS to occupy remaining space --------------------- 2.02s download : Download items ----------------------------------------------- 1.98s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Sat May 25 01:14:19 2019 From: ci at centos.org (ci at centos.org) Date: Sat, 25 May 2019 01:14:19 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #203 In-Reply-To: <943341987.1174.1558660972049.JavaMail.jenkins@jenkins.ci.centos.org> References: <943341987.1174.1558660972049.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <520513003.1366.1558746859246.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 55.26 KB...] changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
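The [701] and [703] findings above all point at the unedited boilerplate galaxy_info in roles/firewall_config/meta/main.yml: the default author/description/company/license strings were never replaced and no platforms list was added, so ansible-lint fails the molecule 'lint' action. A minimal sketch of a meta/main.yml that would satisfy both rules; the concrete values are illustrative placeholders, not taken from the repository:

    galaxy_info:
      author: Gluster infra maintainers    # replace the rule-703 default
      description: Configure firewalld for GlusterFS servers
      company: Gluster community           # illustrative placeholder
      license: GPLv3
      min_ansible_version: 2.5
      platforms:                           # rule 701 wants at least one entry
        - name: EL
          versions:
            - 7
      galaxy_tags: []
    dependencies: []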
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── create └── prepare --> Scenario: 'default' --> Action: 'create' [DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg. PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix └── default ├── lint ├── cleanup ├── destroy ├── dependency ├── syntax ├── create ├── prepare ├── converge ├── idempotence ├── side_effect ├── verify ├── cleanup └── destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... [...the same [701] and [703] ansible-lint findings as above, again for /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml...] An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 Build step 'Execute shell' marked build as failure Performing Post build task... Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Sun May 26 00:13:55 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 26 May 2019 00:13:55 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #375 In-Reply-To: <1522287951.1360.1558743355051.JavaMail.jenkins@jenkins.ci.centos.org> References: <1522287951.1360.1558743355051.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <750160839.1412.1558829635667.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.42 KB...] Total 90 MB/s | 143 MB 00:01 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7_6.2.x86_64 7/49 Installing : kernel-headers-3.10.0-957.12.2.el7.x86_64 8/49 Installing : glibc-headers-2.17-260.el7_6.5.x86_64 9/49 Installing : glibc-devel-2.17-260.el7_6.5.x86_64 10/49 Installing : gcc-4.8.5-36.el7_6.2.x86_64 11/49 Installing : elfutils-0.172-2.el7.x86_64 12/49 Installing : pakchois-0.4-10.el7.x86_64 13/49 Installing : unzip-6.0-19.el7.x86_64 14/49 Installing : dwz-0.11-3.el7.x86_64 15/49 Installing : bzip2-1.0.6-13.el7.x86_64 16/49 Installing : usermode-1.111-5.el7.x86_64 17/49 Installing : patch-2.7.1-10.el7_5.x86_64 18/49 Installing : python-backports-1.0-8.el7.x86_64 19/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 20/49 Installing : python-urllib3-1.10.2-5.el7.noarch 21/49 Installing : python-requests-2.6.0-1.el7_1.noarch 22/49 Installing : python-babel-0.9.6-8.el7.noarch 23/49 Installing : distribution-gpg-keys-1.30-1.el7.noarch 24/49 Installing : mock-core-configs-30.2-1.el7.noarch 25/49 Installing : libmodman-2.0.1-8.el7.x86_64 26/49 Installing : libproxy-0.4.11-11.el7.x86_64 27/49 Installing : python-markupsafe-0.11-10.el7.x86_64 28/49 Installing : python-jinja2-2.7.2-3.el7_6.noarch 29/49 Installing : python2-distro-1.2.0-3.el7.noarch 30/49 Installing : gdb-7.6.1-114.el7.x86_64 31/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/49 Installing : perl-srpm-macros-1-8.el7.noarch 33/49 Installing : pigz-2.3.4-1.el7.x86_64 34/49 Installing : python2-pyroute2-0.4.13-2.el7.noarch 35/49 Installing : golang-src-1.11.5-1.el7.noarch 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : 
redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-9.el7_6.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.15-1.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 2/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 3/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 4/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 5/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 6/49 Verifying : zip-3.0-11.el7.x86_64 7/49 Verifying : nettle-2.7.1-8.el7.x86_64 8/49 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 9/49 Verifying : golang-src-1.11.5-1.el7.noarch 10/49 Verifying : python2-pyroute2-0.4.13-2.el7.noarch 11/49 Verifying : pigz-2.3.4-1.el7.x86_64 12/49 Verifying : perl-srpm-macros-1-8.el7.noarch 13/49 Verifying : golang-1.11.5-1.el7.x86_64 14/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 15/49 Verifying : glibc-devel-2.17-260.el7_6.5.x86_64 16/49 Verifying : python-jinja2-2.7.2-3.el7_6.noarch 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : gdb-7.6.1-114.el7.x86_64 19/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 20/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 21/49 Verifying : gnutls-3.3.29-9.el7_6.x86_64 22/49 Verifying : python2-distro-1.2.0-3.el7.noarch 23/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 24/49 Verifying : libmodman-2.0.1-8.el7.x86_64 25/49 Verifying : mpfr-3.1.1-4.el7.x86_64 26/49 Verifying : distribution-gpg-keys-1.30-1.el7.noarch 27/49 Verifying : python-babel-0.9.6-8.el7.noarch 28/49 Verifying : mock-1.4.15-1.el7.noarch 29/49 Verifying : apr-util-1.5.2-6.el7.x86_64 30/49 Verifying : python-backports-1.0-8.el7.x86_64 31/49 Verifying : patch-2.7.1-10.el7_5.x86_64 32/49 Verifying : libmpc-1.0.1-3.el7.x86_64 33/49 Verifying : usermode-1.111-5.el7.x86_64 34/49 Verifying : python-six-1.9.0-2.el7.noarch 35/49 Verifying : libproxy-0.4.11-11.el7.x86_64 36/49 Verifying : neon-0.30.0-3.el7.x86_64 37/49 Verifying : mock-core-configs-30.2-1.el7.noarch 38/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 39/49 Verifying : bzip2-1.0.6-13.el7.x86_64 40/49 Verifying : subversion-1.7.14-14.el7.x86_64 41/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 42/49 Verifying : glibc-headers-2.17-260.el7_6.5.x86_64 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 46/49 Verifying : pakchois-0.4-10.el7.x86_64 47/49 Verifying : elfutils-0.172-2.el7.x86_64 48/49 Verifying : kernel-headers-3.10.0-957.12.2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.15-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.30-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5 glibc-headers.x86_64 0:2.17-260.el7_6.5 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 
0:3.10.0-957.12.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-3.el7_6 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-3.el7 python2-pyroute2.noarch 0:0.4.13-2.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 [...curl progress output truncated...] Installing gometalinter. Version: 2.0.5 [...curl progress output truncated...] Installing etcd. Version: v3.3.9 [...curl progress output truncated...] ~/nightlyrpm3gTaWB/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpm3gTaWB/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpm3gTaWB/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpm3gTaWB ~ INFO: mock.py version 1.4.15 starting (python version = 2.7.5)...
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpm3gTaWB/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.15 INFO: Mock Version: 1.4.15 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpm3gTaWB/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 1 minutes 31 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M f2ce750564b54a0f8ed03d8ad7b506d5 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.49_pqS:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins5151339384005897785.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done 038624fd +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 249 | n58.dusty | 172.19.2.122 | dusty | 3615 | Deployed | 038624fd | None | None | 7 | x86_64 | 1 | 2570 | None | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Sun May 26 00:37:12 2019 From: ci at centos.org (ci at centos.org) Date: Sun, 26 May 2019 00:37:12 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #179 In-Reply-To: <2125451787.1362.1558744903524.JavaMail.jenkins@jenkins.ci.centos.org> References: <2125451787.1362.1558744903524.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <807157951.1413.1558831032260.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 287.06 KB...] TASK [container-engine/docker : check number of search domains] **************** Sunday 26 May 2019 01:36:46 +0100 (0:00:00.129) 0:01:55.986 ************ TASK [container-engine/docker : check length of search domains] **************** Sunday 26 May 2019 01:36:46 +0100 (0:00:00.125) 0:01:56.112 ************ TASK [container-engine/docker : check for minimum kernel version] ************** Sunday 26 May 2019 01:36:46 +0100 (0:00:00.124) 0:01:56.236 ************ TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Sunday 26 May 2019 01:36:46 +0100 (0:00:00.123) 0:01:56.360 ************ TASK [container-engine/docker : Ensure old versions of Docker are not installed. 
| RedHat] *** Sunday 26 May 2019 01:36:46 +0100 (0:00:00.249) 0:01:56.609 ************ TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Sunday 26 May 2019 01:36:47 +0100 (0:00:00.622) 0:01:57.232 ************ TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Sunday 26 May 2019 01:36:47 +0100 (0:00:00.114) 0:01:57.346 ************ TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Sunday 26 May 2019 01:36:47 +0100 (0:00:00.108) 0:01:57.454 ************ TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Sunday 26 May 2019 01:36:47 +0100 (0:00:00.135) 0:01:57.589 ************ TASK [container-engine/docker : Configure docker repository on Fedora] ********* Sunday 26 May 2019 01:36:48 +0100 (0:00:00.140) 0:01:57.730 ************ TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Sunday 26 May 2019 01:36:48 +0100 (0:00:00.124) 0:01:57.854 ************ TASK [container-engine/docker : Copy yum.conf for editing] ********************* Sunday 26 May 2019 01:36:48 +0100 (0:00:00.121) 0:01:57.976 ************ TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Sunday 26 May 2019 01:36:48 +0100 (0:00:00.122) 0:01:58.098 ************ TASK [container-engine/docker : ensure docker packages are installed] ********** Sunday 26 May 2019 01:36:48 +0100 (0:00:00.122) 0:01:58.221 ************ TASK [container-engine/docker : Ensure docker packages are installed] ********** Sunday 26 May 2019 01:36:48 +0100 (0:00:00.157) 0:01:58.379 ************ TASK [container-engine/docker : get available packages on Ubuntu] ************** Sunday 26 May 2019 01:36:48 +0100 (0:00:00.146) 0:01:58.525 ************ TASK [container-engine/docker : show available packages on ubuntu] ************* Sunday 26 May 2019 01:36:48 +0100 (0:00:00.121) 0:01:58.647 ************ TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Sunday 26 May 2019 01:36:49 +0100 (0:00:00.124) 0:01:58.771 ************ TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Sunday 26 May 2019 01:36:49 +0100 (0:00:00.126) 0:01:58.898 ************ ok: [kube3] ok: [kube2] ok: [kube1] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Sunday 26 May 2019 01:36:50 +0100 (0:00:00.884) 0:01:59.782 ************ ok: [kube1] ok: [kube2] ok: [kube3] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Sunday 26 May 2019 01:36:50 +0100 (0:00:00.521) 0:02:00.304 ************ TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Sunday 26 May 2019 01:36:50 +0100 (0:00:00.125) 0:02:00.429 ************ changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Sunday 26 May 2019 01:36:51 +0100 (0:00:00.554) 0:02:00.983 ************ TASK [container-engine/docker : get systemd version] *************************** Sunday 26 May 2019 01:36:51 +0100 (0:00:00.148) 0:02:01.131 ************ TASK [container-engine/docker : Write docker.service systemd file] ************* Sunday 26 May 2019 01:36:51 +0100 (0:00:00.145) 0:02:01.277 ************ TASK [container-engine/docker : Write docker options systemd drop-in] ********** Sunday 26 May 2019 01:36:51 +0100 (0:00:00.147) 0:02:01.424 ************ changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Sunday 26 May 2019 01:36:52 +0100 (0:00:01.021) 0:02:02.445 ************ changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Sunday 26 May 2019 01:36:53 +0100 (0:00:00.998) 0:02:03.444 ************ TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Sunday 26 May 2019 01:36:53 +0100 (0:00:00.144) 0:02:03.589 ************ RUNNING HANDLER [container-engine/docker : restart docker] ********************* Sunday 26 May 2019 01:36:53 +0100 (0:00:00.104) 0:02:03.693 ************ changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Sunday 26 May 2019 01:36:54 +0100 (0:00:00.532) 0:02:04.226 ************ changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload docker.socket] ****** Sunday 26 May 2019 01:36:55 +0100 (0:00:00.522) 0:02:04.748 ************ RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Sunday 26 May 2019 01:36:55 +0100 (0:00:00.125) 0:02:04.873 ************ changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Sunday 26 May 2019 01:36:58 +0100 (0:00:03.056) 0:02:07.930 ************ Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube1] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Sunday 26 May 2019 01:37:08 +0100 (0:00:10.078) 0:02:18.009 ************ changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : ensure docker service is started and enabled] *** Sunday 26 May 2019 01:37:08 +0100 (0:00:00.526) 0:02:18.535 ************ ok: [kube1] => (item=docker) ok: [kube2] => (item=docker) ok: [kube3] => (item=docker) TASK [download : include_tasks] ************************************************ Sunday 26 May 2019 01:37:09 +0100 (0:00:00.610) 0:02:19.146 ************ included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Sunday 26 May 2019 01:37:09 +0100 (0:00:00.215) 0:02:19.362 ************ ok: [kube1] ok: [kube2] ok: 
[kube3] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Sunday 26 May 2019 01:37:10 +0100 (0:00:00.524) 0:02:19.886 ************ changed: [kube1] changed: [kube2] changed: [kube3] TASK [download : container_download | create local directory for saved/loaded container images] *** Sunday 26 May 2019 01:37:10 +0100 (0:00:00.438) 0:02:20.324 ************ TASK [download : Download items] *********************************************** Sunday 26 May 2019 01:37:10 +0100 (0:00:00.046) 0:02:20.371 ************ fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} [...the identical 'delegate_to' failure repeats verbatim for kube3 and kube2, and again for all three hosts on each remaining download item; two successful includes of /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube3, kube2 are interleaved...] included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube3, kube2 fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED!
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} PLAY RECAP ********************************************************************* kube1 : ok=109 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0 kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0 kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0 Sunday 26 May 2019 01:37:12 +0100 (0:00:01.340) 0:02:21.712 ************ =============================================================================== Install packages ------------------------------------------------------- 25.37s Wait for host to be available ------------------------------------------ 16.22s Extend root VG --------------------------------------------------------- 13.66s gather facts from all instances ---------------------------------------- 10.34s container-engine/docker : Docker | pause while Docker restarts --------- 10.08s container-engine/docker : Docker | reload docker ------------------------ 3.06s Persist loaded modules -------------------------------------------------- 3.03s kubernetes/preinstall : Create kubernetes directories ------------------- 1.87s bootstrap-os : Gather nodes hostnames ----------------------------------- 1.55s Extend the root LV and FS to occupy remaining space --------------------- 1.46s Load required kernel modules -------------------------------------------- 1.42s bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 1.41s download : Download items ----------------------------------------------- 1.34s Gathering Facts --------------------------------------------------------- 1.27s bootstrap-os : Disable fastestmirror plugin ----------------------------- 1.15s download : Download items ----------------------------------------------- 1.13s kubernetes/preinstall : Create cni directories -------------------------- 1.12s bootstrap-os : Create remote_tmp for it is used by another module ------- 1.10s download : Sync container ----------------------------------------------- 1.04s container-engine/docker : Write docker options systemd drop-in ---------- 1.02s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
From ci at centos.org Sun May 26 01:16:51 2019
From: ci at centos.org (ci at centos.org)
Date: Sun, 26 May 2019 01:16:51 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #204
In-Reply-To: <520513003.1366.1558746859246.JavaMail.jenkins@jenkins.ci.centos.org>
References: <520513003.1366.1558746859246.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <228037716.1415.1558833411181.JavaMail.jenkins@jenkins.ci.centos.org>

See ------------------------------------------

[...truncated 55.22 KB...]
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Wait for instance(s) creation to complete] *******************************
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
--> Scenario: 'default'
--> Action: 'prepare'
PLAY [Prepare] *****************************************************************
TASK [Gathering Facts] *********************************************************
ok: [instance]
TASK [Install Dependency Packages] *********************************************
changed: [instance]
TASK [Install Dependency Packages] *********************************************
changed: [instance]
PLAY RECAP *********************************************************************
instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
{'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}}
[703] Should change default metadata: author
[703] Should change default metadata: description
[703] Should change default metadata: company
[703] Should change default metadata: license
[...each [703] finding points at the same meta/main.yml:1 and prints the same metadata dump as above...]
An error occurred during the test sequence action: 'lint'. Cleaning up.
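All five findings share one root cause: roles/firewall_config/meta/main.yml still contains the placeholders that `ansible-galaxy init` generates. Rule [701] wants a non-empty `platforms` list, and the four [703] hits flag `author`, `description`, `company` and `license` left at their defaults. A sketch of a meta/main.yml that would satisfy ansible-lint (the values are illustrative, not the project's real metadata):

    ---
    galaxy_info:
      author: Gluster infra role maintainers   # [703] replace 'your name'; illustrative value
      description: Firewall configuration for GlusterFS storage nodes   # [703] illustrative
      company: Gluster community               # [703] or drop this optional key entirely
      license: GPLv3                           # [703] replace 'license (GPLv2, CC-BY, etc)'
      min_ansible_version: 1.2
      platforms:                               # [701] Role info should contain platforms
        - name: EL
          versions:
            - 7
      galaxy_tags: []
    dependencies: []

Because 'lint' is the first action in molecule's default test matrix, these metadata findings abort the sequence before converge ever runs, which is why the scenario jumps straight to 'destroy' below.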
--> Scenario: 'default'
--> Action: 'destroy'
PLAY [Destroy] *****************************************************************
TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Delete docker network(s)] ************************************************
PLAY RECAP *********************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── create
    └── prepare

--> Scenario: 'default'
--> Action: 'create'
[DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
PLAY [Create] ******************************************************************
TASK [Log into a Docker registry] **********************************************
skipping: [localhost] => (item=None)
TASK [Create Dockerfiles from image names] *************************************
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Discover local Docker images] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]
TASK [Build an Ansible compatible image] ***************************************
ok: [localhost] => (item=None)
ok: [localhost]
TASK [Create docker network(s)] ************************************************
TASK [Determine the CMD directives] ********************************************
ok: [localhost] => (item=None)
ok: [localhost]
TASK [Create molecule instance(s)] *********************************************
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Wait for instance(s) creation to complete] *******************************
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
changed: [localhost] => (item=None)
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
--> Scenario: 'default'
--> Action: 'prepare'
PLAY [Prepare] *****************************************************************
TASK [Gathering Facts] *********************************************************
ok: [instance]
TASK [Install Dependency Packages] *********************************************
changed: [instance]
TASK [Install Dependency Packages] *********************************************
changed: [instance]
PLAY RECAP *********************************************************************
instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
--> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── cleanup
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/...
Lint completed successfully.
--> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml...
[701] Role info should contain platforms
/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1
[703] Should change default metadata: author
[703] Should change default metadata: description
[703] Should change default metadata: company
[703] Should change default metadata: license
[...same five findings and metadata dump as in the first lint run above...]
An error occurred during the test sequence action: 'lint'. Cleaning up.
--> Scenario: 'default'
--> Action: 'destroy'
PLAY [Destroy] *****************************************************************
TASK [Destroy molecule instance(s)] ********************************************
changed: [localhost] => (item=None)
changed: [localhost]
TASK [Wait for instance(s) deletion to complete] *******************************
FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left).
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 Build step 'Execute shell' marked build as failure Performing Post build task... Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Mon May 27 00:14:09 2019 From: ci at centos.org (ci at centos.org) Date: Mon, 27 May 2019 00:14:09 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #376 In-Reply-To: <750160839.1412.1558829635667.JavaMail.jenkins@jenkins.ci.centos.org> References: <750160839.1412.1558829635667.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1068084230.1459.1558916049742.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.38 KB...] Total 99 MB/s | 143 MB 00:01 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7_6.2.x86_64 7/49 Installing : kernel-headers-3.10.0-957.12.2.el7.x86_64 8/49 Installing : glibc-headers-2.17-260.el7_6.5.x86_64 9/49 Installing : glibc-devel-2.17-260.el7_6.5.x86_64 10/49 Installing : gcc-4.8.5-36.el7_6.2.x86_64 11/49 Installing : elfutils-0.172-2.el7.x86_64 12/49 Installing : pakchois-0.4-10.el7.x86_64 13/49 Installing : unzip-6.0-19.el7.x86_64 14/49 Installing : dwz-0.11-3.el7.x86_64 15/49 Installing : bzip2-1.0.6-13.el7.x86_64 16/49 Installing : usermode-1.111-5.el7.x86_64 17/49 Installing : patch-2.7.1-10.el7_5.x86_64 18/49 Installing : python-backports-1.0-8.el7.x86_64 19/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 20/49 Installing : python-urllib3-1.10.2-5.el7.noarch 21/49 Installing : python-requests-2.6.0-1.el7_1.noarch 22/49 Installing : python-babel-0.9.6-8.el7.noarch 23/49 Installing : distribution-gpg-keys-1.30-1.el7.noarch 24/49 Installing : mock-core-configs-30.2-1.el7.noarch 25/49 Installing : libmodman-2.0.1-8.el7.x86_64 26/49 Installing : libproxy-0.4.11-11.el7.x86_64 27/49 Installing : python-markupsafe-0.11-10.el7.x86_64 28/49 Installing : python-jinja2-2.7.2-3.el7_6.noarch 29/49 Installing : python2-distro-1.2.0-3.el7.noarch 30/49 Installing : gdb-7.6.1-114.el7.x86_64 31/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/49 Installing : perl-srpm-macros-1-8.el7.noarch 33/49 Installing : pigz-2.3.4-1.el7.x86_64 34/49 Installing : python2-pyroute2-0.4.13-2.el7.noarch 35/49 Installing : golang-src-1.11.5-1.el7.noarch 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : 
redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-9.el7_6.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.15-1.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 2/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 3/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 4/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 5/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 6/49 Verifying : zip-3.0-11.el7.x86_64 7/49 Verifying : nettle-2.7.1-8.el7.x86_64 8/49 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 9/49 Verifying : golang-src-1.11.5-1.el7.noarch 10/49 Verifying : python2-pyroute2-0.4.13-2.el7.noarch 11/49 Verifying : pigz-2.3.4-1.el7.x86_64 12/49 Verifying : perl-srpm-macros-1-8.el7.noarch 13/49 Verifying : golang-1.11.5-1.el7.x86_64 14/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 15/49 Verifying : glibc-devel-2.17-260.el7_6.5.x86_64 16/49 Verifying : python-jinja2-2.7.2-3.el7_6.noarch 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : gdb-7.6.1-114.el7.x86_64 19/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 20/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 21/49 Verifying : gnutls-3.3.29-9.el7_6.x86_64 22/49 Verifying : python2-distro-1.2.0-3.el7.noarch 23/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 24/49 Verifying : libmodman-2.0.1-8.el7.x86_64 25/49 Verifying : mpfr-3.1.1-4.el7.x86_64 26/49 Verifying : distribution-gpg-keys-1.30-1.el7.noarch 27/49 Verifying : python-babel-0.9.6-8.el7.noarch 28/49 Verifying : mock-1.4.15-1.el7.noarch 29/49 Verifying : apr-util-1.5.2-6.el7.x86_64 30/49 Verifying : python-backports-1.0-8.el7.x86_64 31/49 Verifying : patch-2.7.1-10.el7_5.x86_64 32/49 Verifying : libmpc-1.0.1-3.el7.x86_64 33/49 Verifying : usermode-1.111-5.el7.x86_64 34/49 Verifying : python-six-1.9.0-2.el7.noarch 35/49 Verifying : libproxy-0.4.11-11.el7.x86_64 36/49 Verifying : neon-0.30.0-3.el7.x86_64 37/49 Verifying : mock-core-configs-30.2-1.el7.noarch 38/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 39/49 Verifying : bzip2-1.0.6-13.el7.x86_64 40/49 Verifying : subversion-1.7.14-14.el7.x86_64 41/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 42/49 Verifying : glibc-headers-2.17-260.el7_6.5.x86_64 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 46/49 Verifying : pakchois-0.4-10.el7.x86_64 47/49 Verifying : elfutils-0.172-2.el7.x86_64 48/49 Verifying : kernel-headers-3.10.0-957.12.2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.15-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.30-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5 glibc-headers.x86_64 0:2.17-260.el7_6.5 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 
0:3.10.0-957.12.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-3.el7_6 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-3.el7 python2-pyroute2.noarch 0:0.4.13-2.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 2511 0 --:--:-- --:--:-- --:--:-- 2520 100 8513k 100 8513k 0 0 13.7M 0 --:--:-- --:--:-- --:--:-- 13.7M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2840 0 --:--:-- --:--:-- --:--:-- 2837 100 38.3M 100 38.3M 0 0 45.0M 0 --:--:-- --:--:-- --:--:-- 45.0M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 965 0 --:--:-- --:--:-- --:--:-- 968 0 0 0 620 0 0 2227 0 --:--:-- --:--:-- --:--:-- 2227 0 10.7M 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 10.7M 100 10.7M 0 0 18.5M 0 --:--:-- --:--:-- --:--:-- 73.0M ~/nightlyrpm8rvJ83/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpm8rvJ83/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpm8rvJ83/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpm8rvJ83 ~ INFO: mock.py version 1.4.15 starting (python version = 2.7.5)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpm8rvJ83/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.15 INFO: Mock Version: 1.4.15 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpm8rvJ83/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 1 minutes 32 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M a50ded82f79c4674a8318b3a0b434f83 -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.FKJ2Mw:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True
Logical operation result is TRUE
Running script :
# cico-node-done-from-ansible.sh
# A script that releases nodes from a SSID file written by
SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid}
for ssid in $(cat ${SSID_FILE})
do
    cico -q node done $ssid
done
[gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins1077480071550488233.sh
+ SSID_FILE=
++ cat
+ for ssid in '$(cat ${SSID_FILE})'
+ cico -q node done 0ebe905f
+---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| host_id | hostname  | ip_address  | chassis | used_count | current_state | comment  | distro | rel  | centos_version | architecture | node_pool | console_port | flavor |
+---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
| 222     | n31.dusty | 172.19.2.95 | dusty   | 3620       | Deployed      | 0ebe905f | None   | None | 7              | x86_64       | 1         | 2300         | None   |
+---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0

From ci at centos.org Mon May 27 00:41:29 2019
From: ci at centos.org (ci at centos.org)
Date: Mon, 27 May 2019 00:41:29 +0000 (UTC)
Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #180
In-Reply-To: <807157951.1413.1558831032260.JavaMail.jenkins@jenkins.ci.centos.org>
References: <807157951.1413.1558831032260.JavaMail.jenkins@jenkins.ci.centos.org>
Message-ID: <1187627283.1460.1558917689195.JavaMail.jenkins@jenkins.ci.centos.org>

See ------------------------------------------

[...truncated 287.13 KB...]
TASK [container-engine/docker : check number of search domains] ****************
Monday 27 May 2019 01:40:46 +0100 (0:00:00.350) 0:02:58.823 ************
TASK [container-engine/docker : check length of search domains] ****************
Monday 27 May 2019 01:40:46 +0100 (0:00:00.307) 0:02:59.131 ************
TASK [container-engine/docker : check for minimum kernel version] **************
Monday 27 May 2019 01:40:46 +0100 (0:00:00.303) 0:02:59.434 ************
TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] ***
Monday 27 May 2019 01:40:47 +0100 (0:00:00.283) 0:02:59.718 ************
TASK [container-engine/docker : Ensure old versions of Docker are not installed.
| RedHat] *** Monday 27 May 2019 01:40:47 +0100 (0:00:00.652) 0:03:00.371 ************ TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Monday 27 May 2019 01:40:49 +0100 (0:00:01.290) 0:03:01.661 ************ TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Monday 27 May 2019 01:40:49 +0100 (0:00:00.252) 0:03:01.914 ************ TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Monday 27 May 2019 01:40:49 +0100 (0:00:00.251) 0:03:02.165 ************ TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Monday 27 May 2019 01:40:49 +0100 (0:00:00.305) 0:03:02.470 ************ TASK [container-engine/docker : Configure docker repository on Fedora] ********* Monday 27 May 2019 01:40:50 +0100 (0:00:00.296) 0:03:02.767 ************ TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Monday 27 May 2019 01:40:50 +0100 (0:00:00.284) 0:03:03.051 ************ TASK [container-engine/docker : Copy yum.conf for editing] ********************* Monday 27 May 2019 01:40:50 +0100 (0:00:00.282) 0:03:03.334 ************ TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Monday 27 May 2019 01:40:51 +0100 (0:00:00.281) 0:03:03.616 ************ TASK [container-engine/docker : ensure docker packages are installed] ********** Monday 27 May 2019 01:40:51 +0100 (0:00:00.281) 0:03:03.898 ************ TASK [container-engine/docker : Ensure docker packages are installed] ********** Monday 27 May 2019 01:40:51 +0100 (0:00:00.352) 0:03:04.250 ************ TASK [container-engine/docker : get available packages on Ubuntu] ************** Monday 27 May 2019 01:40:52 +0100 (0:00:00.351) 0:03:04.602 ************ TASK [container-engine/docker : show available packages on ubuntu] ************* Monday 27 May 2019 01:40:52 +0100 (0:00:00.301) 0:03:04.903 ************ TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Monday 27 May 2019 01:40:52 +0100 (0:00:00.288) 0:03:05.192 ************ TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Monday 27 May 2019 01:40:52 +0100 (0:00:00.282) 0:03:05.474 ************ ok: [kube1] ok: [kube3] ok: [kube2] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Monday 27 May 2019 01:40:54 +0100 (0:00:01.919) 0:03:07.393 ************ ok: [kube1] ok: [kube2] ok: [kube3] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Monday 27 May 2019 01:40:55 +0100 (0:00:01.150) 0:03:08.543 ************ TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Monday 27 May 2019 01:40:56 +0100 (0:00:00.351) 0:03:08.895 ************ changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Monday 27 May 2019 01:40:57 +0100 (0:00:01.210) 0:03:10.106 ************ TASK [container-engine/docker : get systemd version] *************************** Monday 27 May 2019 01:40:57 +0100 (0:00:00.314) 0:03:10.420 ************ TASK [container-engine/docker : Write docker.service systemd file] ************* Monday 27 May 2019 01:40:58 +0100 (0:00:00.353) 0:03:10.774 ************ TASK [container-engine/docker : Write docker options systemd drop-in] ********** Monday 27 May 2019 01:40:58 +0100 (0:00:00.352) 0:03:11.126 ************ changed: [kube2] changed: [kube1] changed: [kube3] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Monday 27 May 2019 01:41:00 +0100 (0:00:02.031) 0:03:13.158 ************ changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Monday 27 May 2019 01:41:02 +0100 (0:00:02.194) 0:03:15.352 ************ TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Monday 27 May 2019 01:41:03 +0100 (0:00:00.380) 0:03:15.733 ************ RUNNING HANDLER [container-engine/docker : restart docker] ********************* Monday 27 May 2019 01:41:03 +0100 (0:00:00.280) 0:03:16.014 ************ changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Monday 27 May 2019 01:41:04 +0100 (0:00:01.002) 0:03:17.016 ************ changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload docker.socket] ****** Monday 27 May 2019 01:41:05 +0100 (0:00:01.233) 0:03:18.250 ************ RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Monday 27 May 2019 01:41:06 +0100 (0:00:00.355) 0:03:18.605 ************ changed: [kube1] changed: [kube3] changed: [kube2] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Monday 27 May 2019 01:41:10 +0100 (0:00:04.214) 0:03:22.819 ************ Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube1] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Monday 27 May 2019 01:41:20 +0100 (0:00:10.231) 0:03:33.051 ************ changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : ensure docker service is started and enabled] *** Monday 27 May 2019 01:41:21 +0100 (0:00:01.238) 0:03:34.289 ************ ok: [kube2] => (item=docker) ok: [kube1] => (item=docker) ok: [kube3] => (item=docker) TASK [download : include_tasks] ************************************************ Monday 27 May 2019 01:41:23 +0100 (0:00:01.355) 0:03:35.645 ************ included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Monday 27 May 2019 01:41:23 +0100 (0:00:00.508) 0:03:36.153 ************ ok: [kube1] ok: [kube2] ok: 
[kube3]
TASK [download : container_download | Create dest directory for saved/loaded container images] ***
Monday 27 May 2019 01:41:24 +0100 (0:00:01.306) 0:03:37.460 ************
changed: [kube1]
changed: [kube2]
changed: [kube3]
TASK [download : container_download | create local directory for saved/loaded container images] ***
Monday 27 May 2019 01:41:25 +0100 (0:00:01.002) 0:03:38.463 ************
TASK [download : Download items] ***********************************************
Monday 27 May 2019 01:41:26 +0100 (0:00:00.127) 0:03:38.591 ************
fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"}
[...identical fatal repeated for kube2 and kube3, and again for every remaining download task (failed=10 per host in the recap below), with "included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3" logged three times in between; this is the same failure as in the earlier kubespray run above...]
PLAY RECAP *********************************************************************
kube1 : ok=109 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0
kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0
kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0
Monday 27 May 2019 01:41:28 +0100 (0:00:02.779) 0:03:41.370 ************
===============================================================================
Install packages ------------------------------------------------------- 32.78s
Wait for host to be available ------------------------------------------ 23.87s
gather facts from all instances ---------------------------------------- 16.86s
container-engine/docker : Docker | pause while Docker restarts --------- 10.23s
Persist loaded modules -------------------------------------------------- 5.89s
container-engine/docker : Docker | reload docker ------------------------ 4.21s
kubernetes/preinstall : Create kubernetes directories ------------------- 4.00s
download : Download items ----------------------------------------------- 2.78s
bootstrap-os : Gather nodes hostnames ----------------------------------- 2.73s
Load required kernel modules -------------------------------------------- 2.62s
kubernetes/preinstall : Create cni directories -------------------------- 2.50s
Extend root VG ---------------------------------------------------------- 2.50s
bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.42s
kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.22s
container-engine/docker : Write docker dns systemd drop-in -------------- 2.19s
bootstrap-os : Disable fastestmirror plugin ----------------------------- 2.08s
Gathering Facts --------------------------------------------------------- 2.07s
kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.05s
container-engine/docker : Write docker options systemd drop-in ---------- 2.03s
download : Sync container ----------------------------------------------- 2.02s
==> kube3: An error occurred. The error will be shown after all tasks complete.
An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below.
An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again:
Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again.
Build step 'Execute shell' marked build as failure
Performing Post build task...
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Mon May 27 01:22:54 2019 From: ci at centos.org (ci at centos.org) Date: Mon, 27 May 2019 01:22:54 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #205 In-Reply-To: <228037716.1415.1558833411181.JavaMail.jenkins@jenkins.ci.centos.org> References: <228037716.1415.1558833411181.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <123395865.1468.1558920174661.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 55.21 KB...] changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? lint ??? cleanup ??? destroy ??? dependency ??? syntax ??? create ??? prepare ??? converge ??? idempotence ??? side_effect ??? verify ??? cleanup ??? destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? create ??? prepare --> Scenario: 'default' --> Action: 'create' [DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg. PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? lint ??? cleanup ??? destroy ??? dependency ??? syntax ??? create ??? prepare ??? converge ??? idempotence ??? side_effect ??? verify ??? cleanup ??? destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. 
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... [701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 Build step 'Execute shell' marked build as failure Performing Post build task... Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Tue May 28 00:13:47 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 28 May 2019 00:13:47 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #377 In-Reply-To: <1068084230.1459.1558916049742.JavaMail.jenkins@jenkins.ci.centos.org> References: <1068084230.1459.1558916049742.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <399011792.1518.1559002427055.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.42 KB...] Total 97 MB/s | 143 MB 00:01 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7_6.2.x86_64 7/49 Installing : kernel-headers-3.10.0-957.12.2.el7.x86_64 8/49 Installing : glibc-headers-2.17-260.el7_6.5.x86_64 9/49 Installing : glibc-devel-2.17-260.el7_6.5.x86_64 10/49 Installing : gcc-4.8.5-36.el7_6.2.x86_64 11/49 Installing : elfutils-0.172-2.el7.x86_64 12/49 Installing : pakchois-0.4-10.el7.x86_64 13/49 Installing : unzip-6.0-19.el7.x86_64 14/49 Installing : dwz-0.11-3.el7.x86_64 15/49 Installing : bzip2-1.0.6-13.el7.x86_64 16/49 Installing : usermode-1.111-5.el7.x86_64 17/49 Installing : patch-2.7.1-10.el7_5.x86_64 18/49 Installing : python-backports-1.0-8.el7.x86_64 19/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 20/49 Installing : python-urllib3-1.10.2-5.el7.noarch 21/49 Installing : python-requests-2.6.0-1.el7_1.noarch 22/49 Installing : python-babel-0.9.6-8.el7.noarch 23/49 Installing : distribution-gpg-keys-1.30-1.el7.noarch 24/49 Installing : mock-core-configs-30.2-1.el7.noarch 25/49 Installing : libmodman-2.0.1-8.el7.x86_64 26/49 Installing : libproxy-0.4.11-11.el7.x86_64 27/49 Installing : python-markupsafe-0.11-10.el7.x86_64 28/49 Installing : python-jinja2-2.7.2-3.el7_6.noarch 29/49 Installing : python2-distro-1.2.0-3.el7.noarch 30/49 Installing : gdb-7.6.1-114.el7.x86_64 31/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/49 Installing : perl-srpm-macros-1-8.el7.noarch 33/49 Installing : pigz-2.3.4-1.el7.x86_64 34/49 Installing : python2-pyroute2-0.4.13-2.el7.noarch 35/49 Installing : golang-src-1.11.5-1.el7.noarch 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : 
redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-9.el7_6.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.15-1.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 2/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 3/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 4/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 5/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 6/49 Verifying : zip-3.0-11.el7.x86_64 7/49 Verifying : nettle-2.7.1-8.el7.x86_64 8/49 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 9/49 Verifying : golang-src-1.11.5-1.el7.noarch 10/49 Verifying : python2-pyroute2-0.4.13-2.el7.noarch 11/49 Verifying : pigz-2.3.4-1.el7.x86_64 12/49 Verifying : perl-srpm-macros-1-8.el7.noarch 13/49 Verifying : golang-1.11.5-1.el7.x86_64 14/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 15/49 Verifying : glibc-devel-2.17-260.el7_6.5.x86_64 16/49 Verifying : python-jinja2-2.7.2-3.el7_6.noarch 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : gdb-7.6.1-114.el7.x86_64 19/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 20/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 21/49 Verifying : gnutls-3.3.29-9.el7_6.x86_64 22/49 Verifying : python2-distro-1.2.0-3.el7.noarch 23/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 24/49 Verifying : libmodman-2.0.1-8.el7.x86_64 25/49 Verifying : mpfr-3.1.1-4.el7.x86_64 26/49 Verifying : distribution-gpg-keys-1.30-1.el7.noarch 27/49 Verifying : python-babel-0.9.6-8.el7.noarch 28/49 Verifying : mock-1.4.15-1.el7.noarch 29/49 Verifying : apr-util-1.5.2-6.el7.x86_64 30/49 Verifying : python-backports-1.0-8.el7.x86_64 31/49 Verifying : patch-2.7.1-10.el7_5.x86_64 32/49 Verifying : libmpc-1.0.1-3.el7.x86_64 33/49 Verifying : usermode-1.111-5.el7.x86_64 34/49 Verifying : python-six-1.9.0-2.el7.noarch 35/49 Verifying : libproxy-0.4.11-11.el7.x86_64 36/49 Verifying : neon-0.30.0-3.el7.x86_64 37/49 Verifying : mock-core-configs-30.2-1.el7.noarch 38/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 39/49 Verifying : bzip2-1.0.6-13.el7.x86_64 40/49 Verifying : subversion-1.7.14-14.el7.x86_64 41/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 42/49 Verifying : glibc-headers-2.17-260.el7_6.5.x86_64 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 46/49 Verifying : pakchois-0.4-10.el7.x86_64 47/49 Verifying : elfutils-0.172-2.el7.x86_64 48/49 Verifying : kernel-headers-3.10.0-957.12.2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.15-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.30-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5 glibc-headers.x86_64 0:2.17-260.el7_6.5 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 
0:3.10.0-957.12.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-3.el7_6 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-3.el7 python2-pyroute2.noarch 0:0.4.13-2.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 2323 0 --:--:-- --:--:-- --:--:-- 2335 100 8513k 100 8513k 0 0 15.0M 0 --:--:-- --:--:-- --:--:-- 15.0M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 2487 0 --:--:-- --:--:-- --:--:-- 2488 100 38.3M 100 38.3M 0 0 47.1M 0 --:--:-- --:--:-- --:--:-- 47.1M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 983 0 --:--:-- --:--:-- --:--:-- 987 0 0 0 620 0 0 2611 0 --:--:-- --:--:-- --:--:-- 2611 100 10.7M 100 10.7M 0 0 21.6M 0 --:--:-- --:--:-- --:--:-- 21.6M ~/nightlyrpmKYQfqc/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmKYQfqc/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmKYQfqc/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmKYQfqc ~ INFO: mock.py version 1.4.15 starting (python version = 2.7.5)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmKYQfqc/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.15 INFO: Mock Version: 1.4.15 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmKYQfqc/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 1 minutes 32 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M b603d8c275d94c82a8040f6c5367e68b -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.Dk6Ij4:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins3881174403682108580.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done f584a461 +---------+----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 192 | n1.dusty | 172.19.2.65 | dusty | 3626 | Deployed | f584a461 | None | None | 7 | x86_64 | 1 | 2000 | None | +---------+----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Tue May 28 00:37:03 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 28 May 2019 00:37:03 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #181 In-Reply-To: <1187627283.1460.1558917689195.JavaMail.jenkins@jenkins.ci.centos.org> References: <1187627283.1460.1558917689195.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1772888283.1519.1559003823770.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 287.05 KB...] TASK [container-engine/docker : check number of search domains] **************** Tuesday 28 May 2019 01:36:37 +0100 (0:00:00.127) 0:01:53.840 *********** TASK [container-engine/docker : check length of search domains] **************** Tuesday 28 May 2019 01:36:38 +0100 (0:00:00.126) 0:01:53.967 *********** TASK [container-engine/docker : check for minimum kernel version] ************** Tuesday 28 May 2019 01:36:38 +0100 (0:00:00.132) 0:01:54.099 *********** TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Tuesday 28 May 2019 01:36:38 +0100 (0:00:00.127) 0:01:54.226 *********** TASK [container-engine/docker : Ensure old versions of Docker are not installed. 
| RedHat] *** Tuesday 28 May 2019 01:36:38 +0100 (0:00:00.310) 0:01:54.536 *********** TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Tuesday 28 May 2019 01:36:39 +0100 (0:00:00.623) 0:01:55.160 *********** TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Tuesday 28 May 2019 01:36:39 +0100 (0:00:00.108) 0:01:55.268 *********** TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Tuesday 28 May 2019 01:36:39 +0100 (0:00:00.107) 0:01:55.376 *********** TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Tuesday 28 May 2019 01:36:39 +0100 (0:00:00.132) 0:01:55.508 *********** TASK [container-engine/docker : Configure docker repository on Fedora] ********* Tuesday 28 May 2019 01:36:39 +0100 (0:00:00.133) 0:01:55.641 *********** TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Tuesday 28 May 2019 01:36:39 +0100 (0:00:00.124) 0:01:55.766 *********** TASK [container-engine/docker : Copy yum.conf for editing] ********************* Tuesday 28 May 2019 01:36:39 +0100 (0:00:00.123) 0:01:55.889 *********** TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Tuesday 28 May 2019 01:36:40 +0100 (0:00:00.121) 0:01:56.010 *********** TASK [container-engine/docker : ensure docker packages are installed] ********** Tuesday 28 May 2019 01:36:40 +0100 (0:00:00.120) 0:01:56.131 *********** TASK [container-engine/docker : Ensure docker packages are installed] ********** Tuesday 28 May 2019 01:36:40 +0100 (0:00:00.160) 0:01:56.291 *********** TASK [container-engine/docker : get available packages on Ubuntu] ************** Tuesday 28 May 2019 01:36:40 +0100 (0:00:00.152) 0:01:56.444 *********** TASK [container-engine/docker : show available packages on ubuntu] ************* Tuesday 28 May 2019 01:36:40 +0100 (0:00:00.120) 0:01:56.565 *********** TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Tuesday 28 May 2019 01:36:40 +0100 (0:00:00.125) 0:01:56.690 *********** TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Tuesday 28 May 2019 01:36:40 +0100 (0:00:00.122) 0:01:56.813 *********** ok: [kube1] ok: [kube2] ok: [kube3] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Tuesday 28 May 2019 01:36:41 +0100 (0:00:00.881) 0:01:57.694 *********** ok: [kube1] ok: [kube2] ok: [kube3] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Tuesday 28 May 2019 01:36:42 +0100 (0:00:00.495) 0:01:58.190 *********** TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Tuesday 28 May 2019 01:36:42 +0100 (0:00:00.123) 0:01:58.314 *********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Tuesday 28 May 2019 01:36:42 +0100 (0:00:00.439) 0:01:58.754 *********** TASK [container-engine/docker : get systemd version] *************************** Tuesday 28 May 2019 01:36:42 +0100 (0:00:00.151) 0:01:58.905 *********** TASK [container-engine/docker : Write docker.service systemd file] ************* Tuesday 28 May 2019 01:36:43 +0100 (0:00:00.140) 0:01:59.046 *********** TASK [container-engine/docker : Write docker options systemd drop-in] ********** Tuesday 28 May 2019 01:36:43 +0100 (0:00:00.144) 0:01:59.191 *********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Tuesday 28 May 2019 01:36:44 +0100 (0:00:00.895) 0:02:00.086 *********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Tuesday 28 May 2019 01:36:45 +0100 (0:00:00.894) 0:02:00.980 *********** TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Tuesday 28 May 2019 01:36:45 +0100 (0:00:00.141) 0:02:01.122 *********** RUNNING HANDLER [container-engine/docker : restart docker] ********************* Tuesday 28 May 2019 01:36:45 +0100 (0:00:00.122) 0:02:01.244 *********** changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Tuesday 28 May 2019 01:36:45 +0100 (0:00:00.434) 0:02:01.679 *********** changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload docker.socket] ****** Tuesday 28 May 2019 01:36:46 +0100 (0:00:00.513) 0:02:02.192 *********** RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Tuesday 28 May 2019 01:36:46 +0100 (0:00:00.121) 0:02:02.313 *********** changed: [kube3] changed: [kube1] changed: [kube2] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Tuesday 28 May 2019 01:36:49 +0100 (0:00:03.127) 0:02:05.441 *********** Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube1] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Tuesday 28 May 2019 01:36:59 +0100 (0:00:10.100) 0:02:15.542 *********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : ensure docker service is started and enabled] *** Tuesday 28 May 2019 01:37:00 +0100 (0:00:00.524) 0:02:16.067 *********** ok: [kube1] => (item=docker) ok: [kube2] => (item=docker) ok: [kube3] => (item=docker) TASK [download : include_tasks] ************************************************ Tuesday 28 May 2019 01:37:00 +0100 (0:00:00.715) 0:02:16.782 *********** included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Tuesday 28 May 2019 01:37:01 +0100 (0:00:00.212) 0:02:16.994 *********** ok: [kube1] ok: [kube2] ok: 
[kube3] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Tuesday 28 May 2019 01:37:01 +0100 (0:00:00.543) 0:02:17.538 *********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [download : container_download | create local directory for saved/loaded container images] *** Tuesday 28 May 2019 01:37:02 +0100 (0:00:00.445) 0:02:17.984 *********** TASK [download : Download items] *********************************************** Tuesday 28 May 2019 01:37:02 +0100 (0:00:00.065) 0:02:18.049 *********** fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} PLAY RECAP ********************************************************************* kube1 : ok=109 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0 kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0 kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0 Tuesday 28 May 2019 01:37:03 +0100 (0:00:01.380) 0:02:19.429 *********** =============================================================================== Install packages ------------------------------------------------------- 24.90s Wait for host to be available ------------------------------------------ 16.24s Extend root VG --------------------------------------------------------- 11.64s container-engine/docker : Docker | pause while Docker restarts --------- 10.10s gather facts from all instances ----------------------------------------- 9.45s Persist loaded modules -------------------------------------------------- 3.47s container-engine/docker : Docker | reload docker ------------------------ 3.13s kubernetes/preinstall : Create kubernetes directories ------------------- 1.86s Load required kernel modules -------------------------------------------- 1.62s bootstrap-os : Gather nodes hostnames ----------------------------------- 1.57s Extend the root LV and FS to occupy remaining space --------------------- 1.53s bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 1.39s download : Download items ----------------------------------------------- 1.38s Gathering Facts --------------------------------------------------------- 1.29s bootstrap-os : Disable fastestmirror plugin ----------------------------- 1.24s download : Download items ----------------------------------------------- 1.20s kubernetes/preinstall : Create cni directories -------------------------- 1.17s bootstrap-os : Create remote_tmp for it is used by another module ------- 1.15s download : Sync container ----------------------------------------------- 1.09s kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 1.06s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Tue May 28 01:11:37 2019 From: ci at centos.org (ci at centos.org) Date: Tue, 28 May 2019 01:11:37 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #206 In-Reply-To: <123395865.1468.1558920174661.JavaMail.jenkins@jenkins.ci.centos.org> References: <123395865.1468.1558920174661.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <973402913.1521.1559005897311.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 56.14 KB...] changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? lint ??? cleanup ??? destroy ??? dependency ??? syntax ??? create ??? prepare ??? converge ??? idempotence ??? side_effect ??? verify ??? cleanup ??? destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? create ??? prepare --> Scenario: 'default' --> Action: 'create' [DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg. PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? lint ??? cleanup ??? destroy ??? dependency ??? syntax ??? create ??? prepare ??? converge ??? idempotence ??? side_effect ??? verify ??? cleanup ??? destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. 
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... [701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 Build step 'Execute shell' marked build as failure Performing Post build task... Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Wed May 29 00:15:59 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 29 May 2019 00:15:59 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #378 In-Reply-To: <399011792.1518.1559002427055.JavaMail.jenkins@jenkins.ci.centos.org> References: <399011792.1518.1559002427055.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1513554786.1577.1559088959405.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.43 KB...] Total 56 MB/s | 143 MB 00:02 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7_6.2.x86_64 7/49 Installing : kernel-headers-3.10.0-957.12.2.el7.x86_64 8/49 Installing : glibc-headers-2.17-260.el7_6.5.x86_64 9/49 Installing : glibc-devel-2.17-260.el7_6.5.x86_64 10/49 Installing : gcc-4.8.5-36.el7_6.2.x86_64 11/49 Installing : elfutils-0.172-2.el7.x86_64 12/49 Installing : pakchois-0.4-10.el7.x86_64 13/49 Installing : unzip-6.0-19.el7.x86_64 14/49 Installing : dwz-0.11-3.el7.x86_64 15/49 Installing : bzip2-1.0.6-13.el7.x86_64 16/49 Installing : usermode-1.111-5.el7.x86_64 17/49 Installing : patch-2.7.1-10.el7_5.x86_64 18/49 Installing : python-backports-1.0-8.el7.x86_64 19/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 20/49 Installing : python-urllib3-1.10.2-5.el7.noarch 21/49 Installing : python-requests-2.6.0-1.el7_1.noarch 22/49 Installing : python-babel-0.9.6-8.el7.noarch 23/49 Installing : distribution-gpg-keys-1.30-1.el7.noarch 24/49 Installing : mock-core-configs-30.2-1.el7.noarch 25/49 Installing : libmodman-2.0.1-8.el7.x86_64 26/49 Installing : libproxy-0.4.11-11.el7.x86_64 27/49 Installing : python-markupsafe-0.11-10.el7.x86_64 28/49 Installing : python-jinja2-2.7.2-3.el7_6.noarch 29/49 Installing : python2-distro-1.2.0-3.el7.noarch 30/49 Installing : gdb-7.6.1-114.el7.x86_64 31/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/49 Installing : perl-srpm-macros-1-8.el7.noarch 33/49 Installing : pigz-2.3.4-1.el7.x86_64 34/49 Installing : python2-pyroute2-0.4.13-2.el7.noarch 35/49 Installing : golang-src-1.11.5-1.el7.noarch 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : 
redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-9.el7_6.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.15-1.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 2/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 3/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 4/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 5/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 6/49 Verifying : zip-3.0-11.el7.x86_64 7/49 Verifying : nettle-2.7.1-8.el7.x86_64 8/49 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 9/49 Verifying : golang-src-1.11.5-1.el7.noarch 10/49 Verifying : python2-pyroute2-0.4.13-2.el7.noarch 11/49 Verifying : pigz-2.3.4-1.el7.x86_64 12/49 Verifying : perl-srpm-macros-1-8.el7.noarch 13/49 Verifying : golang-1.11.5-1.el7.x86_64 14/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 15/49 Verifying : glibc-devel-2.17-260.el7_6.5.x86_64 16/49 Verifying : python-jinja2-2.7.2-3.el7_6.noarch 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : gdb-7.6.1-114.el7.x86_64 19/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 20/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 21/49 Verifying : gnutls-3.3.29-9.el7_6.x86_64 22/49 Verifying : python2-distro-1.2.0-3.el7.noarch 23/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 24/49 Verifying : libmodman-2.0.1-8.el7.x86_64 25/49 Verifying : mpfr-3.1.1-4.el7.x86_64 26/49 Verifying : distribution-gpg-keys-1.30-1.el7.noarch 27/49 Verifying : python-babel-0.9.6-8.el7.noarch 28/49 Verifying : mock-1.4.15-1.el7.noarch 29/49 Verifying : apr-util-1.5.2-6.el7.x86_64 30/49 Verifying : python-backports-1.0-8.el7.x86_64 31/49 Verifying : patch-2.7.1-10.el7_5.x86_64 32/49 Verifying : libmpc-1.0.1-3.el7.x86_64 33/49 Verifying : usermode-1.111-5.el7.x86_64 34/49 Verifying : python-six-1.9.0-2.el7.noarch 35/49 Verifying : libproxy-0.4.11-11.el7.x86_64 36/49 Verifying : neon-0.30.0-3.el7.x86_64 37/49 Verifying : mock-core-configs-30.2-1.el7.noarch 38/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 39/49 Verifying : bzip2-1.0.6-13.el7.x86_64 40/49 Verifying : subversion-1.7.14-14.el7.x86_64 41/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 42/49 Verifying : glibc-headers-2.17-260.el7_6.5.x86_64 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 46/49 Verifying : pakchois-0.4-10.el7.x86_64 47/49 Verifying : elfutils-0.172-2.el7.x86_64 48/49 Verifying : kernel-headers-3.10.0-957.12.2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.15-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.30-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5 glibc-headers.x86_64 0:2.17-260.el7_6.5 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 
0:3.10.0-957.12.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-3.el7_6 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-3.el7 python2-pyroute2.noarch 0:0.4.13-2.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1863 0 --:--:-- --:--:-- --:--:-- 1873 100 8513k 100 8513k 0 0 13.1M 0 --:--:-- --:--:-- --:--:-- 13.1M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 1915 0 --:--:-- --:--:-- --:--:-- 1911 100 38.3M 100 38.3M 0 0 36.0M 0 0:00:01 0:00:01 --:--:-- 36.0M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 640 0 --:--:-- --:--:-- --:--:-- 642 0 0 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0 0 0 0 620 0 0 541 0 --:--:-- 0:00:01 --:--:-- 605k 100 10.7M 100 10.7M 0 0 7191k 0 0:00:01 0:00:01 --:--:-- 7191k ~/nightlyrpmXBuSbg/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmXBuSbg/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmXBuSbg/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmXBuSbg ~ INFO: mock.py version 1.4.15 starting (python version = 2.7.5)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmXBuSbg/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.15 INFO: Mock Version: 1.4.15 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmXBuSbg/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 26 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M d8521d1784fc45e2acce4ba6f4c1755f -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.Lp_te1:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins2202484159485818472.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done 4023c723 +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 115 | n51.pufty | 172.19.3.115 | pufty | 3606 | Deployed | 4023c723 | None | None | 7 | x86_64 | 1 | 2500 | None | +---------+-----------+--------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Wed May 29 00:41:40 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 29 May 2019 00:41:40 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #182 In-Reply-To: <1772888283.1519.1559003823770.JavaMail.jenkins@jenkins.ci.centos.org> References: <1772888283.1519.1559003823770.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <424137404.1578.1559090500937.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 287.10 KB...] TASK [container-engine/docker : check number of search domains] **************** Wednesday 29 May 2019 01:40:58 +0100 (0:00:00.287) 0:03:00.419 ********* TASK [container-engine/docker : check length of search domains] **************** Wednesday 29 May 2019 01:40:58 +0100 (0:00:00.312) 0:03:00.732 ********* TASK [container-engine/docker : check for minimum kernel version] ************** Wednesday 29 May 2019 01:40:59 +0100 (0:00:00.288) 0:03:01.021 ********* TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Wednesday 29 May 2019 01:40:59 +0100 (0:00:00.285) 0:03:01.307 ********* TASK [container-engine/docker : Ensure old versions of Docker are not installed. 
| RedHat] *** Wednesday 29 May 2019 01:41:00 +0100 (0:00:00.598) 0:03:01.905 ********* TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Wednesday 29 May 2019 01:41:01 +0100 (0:00:01.333) 0:03:03.239 ********* TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Wednesday 29 May 2019 01:41:01 +0100 (0:00:00.266) 0:03:03.505 ********* TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Wednesday 29 May 2019 01:41:02 +0100 (0:00:00.261) 0:03:03.767 ********* TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Wednesday 29 May 2019 01:41:02 +0100 (0:00:00.325) 0:03:04.092 ********* TASK [container-engine/docker : Configure docker repository on Fedora] ********* Wednesday 29 May 2019 01:41:02 +0100 (0:00:00.303) 0:03:04.396 ********* TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Wednesday 29 May 2019 01:41:02 +0100 (0:00:00.320) 0:03:04.716 ********* TASK [container-engine/docker : Copy yum.conf for editing] ********************* Wednesday 29 May 2019 01:41:03 +0100 (0:00:00.278) 0:03:04.995 ********* TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Wednesday 29 May 2019 01:41:03 +0100 (0:00:00.278) 0:03:05.273 ********* TASK [container-engine/docker : ensure docker packages are installed] ********** Wednesday 29 May 2019 01:41:03 +0100 (0:00:00.279) 0:03:05.553 ********* TASK [container-engine/docker : Ensure docker packages are installed] ********** Wednesday 29 May 2019 01:41:04 +0100 (0:00:00.348) 0:03:05.901 ********* TASK [container-engine/docker : get available packages on Ubuntu] ************** Wednesday 29 May 2019 01:41:04 +0100 (0:00:00.329) 0:03:06.231 ********* TASK [container-engine/docker : show available packages on ubuntu] ************* Wednesday 29 May 2019 01:41:04 +0100 (0:00:00.283) 0:03:06.515 ********* TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Wednesday 29 May 2019 01:41:05 +0100 (0:00:00.280) 0:03:06.796 ********* TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Wednesday 29 May 2019 01:41:05 +0100 (0:00:00.293) 0:03:07.089 ********* ok: [kube1] ok: [kube2] ok: [kube3] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Wednesday 29 May 2019 01:41:07 +0100 (0:00:01.931) 0:03:09.021 ********* ok: [kube2] ok: [kube1] ok: [kube3] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Wednesday 29 May 2019 01:41:08 +0100 (0:00:01.051) 0:03:10.073 ********* TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Wednesday 29 May 2019 01:41:08 +0100 (0:00:00.281) 0:03:10.355 ********* changed: [kube1] changed: [kube3] changed: [kube2] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Wednesday 29 May 2019 01:41:09 +0100 (0:00:01.044) 0:03:11.399 ********* TASK [container-engine/docker : get systemd version] *************************** Wednesday 29 May 2019 01:41:09 +0100 (0:00:00.320) 0:03:11.720 ********* TASK [container-engine/docker : Write docker.service systemd file] ************* Wednesday 29 May 2019 01:41:10 +0100 (0:00:00.299) 0:03:12.019 ********* TASK [container-engine/docker : Write docker options systemd drop-in] ********** Wednesday 29 May 2019 01:41:10 +0100 (0:00:00.302) 0:03:12.322 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Wednesday 29 May 2019 01:41:12 +0100 (0:00:02.096) 0:03:14.419 ********* changed: [kube1] changed: [kube3] changed: [kube2] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Wednesday 29 May 2019 01:41:14 +0100 (0:00:01.938) 0:03:16.357 ********* TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Wednesday 29 May 2019 01:41:15 +0100 (0:00:00.414) 0:03:16.771 ********* RUNNING HANDLER [container-engine/docker : restart docker] ********************* Wednesday 29 May 2019 01:41:15 +0100 (0:00:00.267) 0:03:17.039 ********* changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Wednesday 29 May 2019 01:41:16 +0100 (0:00:01.013) 0:03:18.053 ********* changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload docker.socket] ****** Wednesday 29 May 2019 01:41:17 +0100 (0:00:01.115) 0:03:19.168 ********* RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Wednesday 29 May 2019 01:41:17 +0100 (0:00:00.343) 0:03:19.511 ********* changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Wednesday 29 May 2019 01:41:22 +0100 (0:00:04.331) 0:03:23.843 ********* Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube1] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Wednesday 29 May 2019 01:41:32 +0100 (0:00:10.233) 0:03:34.077 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : ensure docker service is started and enabled] *** Wednesday 29 May 2019 01:41:33 +0100 (0:00:01.324) 0:03:35.401 ********* ok: [kube2] => (item=docker) ok: [kube1] => (item=docker) ok: [kube3] => (item=docker) TASK [download : include_tasks] ************************************************ Wednesday 29 May 2019 01:41:34 +0100 (0:00:01.213) 0:03:36.615 ********* included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Wednesday 29 May 2019 01:41:35 +0100 (0:00:00.515) 0:03:37.130 ********* ok: [kube1] ok: [kube2] ok: 
[kube3] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Wednesday 29 May 2019 01:41:36 +0100 (0:00:01.186) 0:03:38.317 ********* changed: [kube1] changed: [kube2] changed: [kube3] TASK [download : container_download | create local directory for saved/loaded container images] *** Wednesday 29 May 2019 01:41:37 +0100 (0:00:01.064) 0:03:39.381 ********* TASK [download : Download items] *********************************************** Wednesday 29 May 2019 01:41:37 +0100 (0:00:00.106) 0:03:39.487 ********* fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} PLAY RECAP ********************************************************************* kube1 : ok=109 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0 kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0 kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0 Wednesday 29 May 2019 01:41:40 +0100 (0:00:02.753) 0:03:42.241 ********* =============================================================================== Install packages ------------------------------------------------------- 32.61s Wait for host to be available ------------------------------------------ 24.03s gather facts from all instances ---------------------------------------- 17.87s container-engine/docker : Docker | pause while Docker restarts --------- 10.23s Persist loaded modules -------------------------------------------------- 5.84s container-engine/docker : Docker | reload docker ------------------------ 4.33s kubernetes/preinstall : Create kubernetes directories ------------------- 3.90s download : Download items ----------------------------------------------- 2.75s kubernetes/preinstall : Create cni directories -------------------------- 2.62s bootstrap-os : Gather nodes hostnames ----------------------------------- 2.61s Load required kernel modules -------------------------------------------- 2.51s Extend root VG ---------------------------------------------------------- 2.50s bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.47s kubernetes/preinstall : Enable ip forwarding ---------------------------- 2.35s kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 2.13s download : Download items ----------------------------------------------- 2.12s container-engine/docker : Write docker options systemd drop-in ---------- 2.10s download : Sync container ----------------------------------------------- 2.06s kubernetes/preinstall : Hosts | populate inventory into hosts file ------ 2.03s bootstrap-os : Disable fastestmirror plugin ----------------------------- 1.99s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Wed May 29 01:14:40 2019 From: ci at centos.org (ci at centos.org) Date: Wed, 29 May 2019 01:14:40 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #207 In-Reply-To: <973402913.1521.1559005897311.JavaMail.jenkins@jenkins.ci.centos.org> References: <973402913.1521.1559005897311.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1528358074.1579.1559092480797.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 56.14 KB...] changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? lint ??? cleanup ??? destroy ??? dependency ??? syntax ??? create ??? prepare ??? converge ??? idempotence ??? side_effect ??? verify ??? cleanup ??? destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? create ??? prepare --> Scenario: 'default' --> Action: 'create' [DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg. PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? lint ??? cleanup ??? destroy ??? dependency ??? syntax ??? create ??? prepare ??? converge ??? idempotence ??? side_effect ??? verify ??? cleanup ??? destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. 
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... [701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 Build step 'Execute shell' marked build as failure Performing Post build task... Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Thu May 30 00:15:53 2019 From: ci at centos.org (ci at centos.org) Date: Thu, 30 May 2019 00:15:53 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #379 In-Reply-To: <1513554786.1577.1559088959405.JavaMail.jenkins@jenkins.ci.centos.org> References: <1513554786.1577.1559088959405.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <162451506.1623.1559175353050.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.42 KB...] Total 62 MB/s | 143 MB 00:02 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7_6.2.x86_64 7/49 Installing : kernel-headers-3.10.0-957.12.2.el7.x86_64 8/49 Installing : glibc-headers-2.17-260.el7_6.5.x86_64 9/49 Installing : glibc-devel-2.17-260.el7_6.5.x86_64 10/49 Installing : gcc-4.8.5-36.el7_6.2.x86_64 11/49 Installing : elfutils-0.172-2.el7.x86_64 12/49 Installing : pakchois-0.4-10.el7.x86_64 13/49 Installing : unzip-6.0-19.el7.x86_64 14/49 Installing : dwz-0.11-3.el7.x86_64 15/49 Installing : bzip2-1.0.6-13.el7.x86_64 16/49 Installing : usermode-1.111-5.el7.x86_64 17/49 Installing : patch-2.7.1-10.el7_5.x86_64 18/49 Installing : python-backports-1.0-8.el7.x86_64 19/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 20/49 Installing : python-urllib3-1.10.2-5.el7.noarch 21/49 Installing : python-requests-2.6.0-1.el7_1.noarch 22/49 Installing : python-babel-0.9.6-8.el7.noarch 23/49 Installing : distribution-gpg-keys-1.30-1.el7.noarch 24/49 Installing : mock-core-configs-30.2-1.el7.noarch 25/49 Installing : libmodman-2.0.1-8.el7.x86_64 26/49 Installing : libproxy-0.4.11-11.el7.x86_64 27/49 Installing : python-markupsafe-0.11-10.el7.x86_64 28/49 Installing : python-jinja2-2.7.2-3.el7_6.noarch 29/49 Installing : python2-distro-1.2.0-3.el7.noarch 30/49 Installing : gdb-7.6.1-114.el7.x86_64 31/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/49 Installing : perl-srpm-macros-1-8.el7.noarch 33/49 Installing : pigz-2.3.4-1.el7.x86_64 34/49 Installing : python2-pyroute2-0.4.13-2.el7.noarch 35/49 Installing : golang-src-1.11.5-1.el7.noarch 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : 
redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-9.el7_6.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.15-1.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 2/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 3/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 4/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 5/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 6/49 Verifying : zip-3.0-11.el7.x86_64 7/49 Verifying : nettle-2.7.1-8.el7.x86_64 8/49 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 9/49 Verifying : golang-src-1.11.5-1.el7.noarch 10/49 Verifying : python2-pyroute2-0.4.13-2.el7.noarch 11/49 Verifying : pigz-2.3.4-1.el7.x86_64 12/49 Verifying : perl-srpm-macros-1-8.el7.noarch 13/49 Verifying : golang-1.11.5-1.el7.x86_64 14/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 15/49 Verifying : glibc-devel-2.17-260.el7_6.5.x86_64 16/49 Verifying : python-jinja2-2.7.2-3.el7_6.noarch 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : gdb-7.6.1-114.el7.x86_64 19/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 20/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 21/49 Verifying : gnutls-3.3.29-9.el7_6.x86_64 22/49 Verifying : python2-distro-1.2.0-3.el7.noarch 23/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 24/49 Verifying : libmodman-2.0.1-8.el7.x86_64 25/49 Verifying : mpfr-3.1.1-4.el7.x86_64 26/49 Verifying : distribution-gpg-keys-1.30-1.el7.noarch 27/49 Verifying : python-babel-0.9.6-8.el7.noarch 28/49 Verifying : mock-1.4.15-1.el7.noarch 29/49 Verifying : apr-util-1.5.2-6.el7.x86_64 30/49 Verifying : python-backports-1.0-8.el7.x86_64 31/49 Verifying : patch-2.7.1-10.el7_5.x86_64 32/49 Verifying : libmpc-1.0.1-3.el7.x86_64 33/49 Verifying : usermode-1.111-5.el7.x86_64 34/49 Verifying : python-six-1.9.0-2.el7.noarch 35/49 Verifying : libproxy-0.4.11-11.el7.x86_64 36/49 Verifying : neon-0.30.0-3.el7.x86_64 37/49 Verifying : mock-core-configs-30.2-1.el7.noarch 38/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 39/49 Verifying : bzip2-1.0.6-13.el7.x86_64 40/49 Verifying : subversion-1.7.14-14.el7.x86_64 41/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 42/49 Verifying : glibc-headers-2.17-260.el7_6.5.x86_64 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 46/49 Verifying : pakchois-0.4-10.el7.x86_64 47/49 Verifying : elfutils-0.172-2.el7.x86_64 48/49 Verifying : kernel-headers-3.10.0-957.12.2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.15-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.30-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5 glibc-headers.x86_64 0:2.17-260.el7_6.5 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 
0:3.10.0-957.12.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-3.el7_6 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-3.el7 python2-pyroute2.noarch 0:0.4.13-2.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 1654 0 --:--:-- --:--:-- --:--:-- 1657 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 8513k 100 8513k 0 0 10.2M 0 --:--:-- --:--:-- --:--:-- 44.2M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 1961 0 --:--:-- --:--:-- --:--:-- 1965 76 38.3M 76 29.3M 0 0 37.9M 0 0:00:01 --:--:-- 0:00:01 37.9M100 38.3M 100 38.3M 0 0 44.5M 0 --:--:-- --:--:-- --:--:-- 104M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 561 0 --:--:-- --:--:-- --:--:-- 562 0 0 0 620 0 0 1615 0 --:--:-- --:--:-- --:--:-- 1615 0 10.7M 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 10.7M 100 10.7M 0 0 16.1M 0 --:--:-- --:--:-- --:--:-- 81.3M ~/nightlyrpmDXF44B/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmDXF44B/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmDXF44B/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmDXF44B ~ INFO: mock.py version 1.4.15 starting (python version = 2.7.5)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmDXF44B/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.15 INFO: Mock Version: 1.4.15 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmDXF44B/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 2 minutes 24 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M fbe3f8735a374d7f97db297deacddc7c -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.PQkHU4:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins2152718497698802546.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done cd73bd94 +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 162 | n35.crusty | 172.19.2.35 | crusty | 3585 | Deployed | cd73bd94 | None | None | 7 | x86_64 | 1 | 2340 | None | +---------+------------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Thu May 30 00:37:07 2019 From: ci at centos.org (ci at centos.org) Date: Thu, 30 May 2019 00:37:07 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #183 In-Reply-To: <424137404.1578.1559090500937.JavaMail.jenkins@jenkins.ci.centos.org> References: <424137404.1578.1559090500937.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1554586424.1632.1559176627108.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 287.11 KB...] TASK [container-engine/docker : check number of search domains] **************** Thursday 30 May 2019 01:36:41 +0100 (0:00:00.148) 0:01:58.436 ********** TASK [container-engine/docker : check length of search domains] **************** Thursday 30 May 2019 01:36:41 +0100 (0:00:00.125) 0:01:58.561 ********** TASK [container-engine/docker : check for minimum kernel version] ************** Thursday 30 May 2019 01:36:41 +0100 (0:00:00.125) 0:01:58.686 ********** TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Thursday 30 May 2019 01:36:41 +0100 (0:00:00.124) 0:01:58.811 ********** TASK [container-engine/docker : Ensure old versions of Docker are not installed. 
| RedHat] *** Thursday 30 May 2019 01:36:41 +0100 (0:00:00.263) 0:01:59.074 ********** TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Thursday 30 May 2019 01:36:42 +0100 (0:00:00.647) 0:01:59.722 ********** TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Thursday 30 May 2019 01:36:42 +0100 (0:00:00.114) 0:01:59.837 ********** TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Thursday 30 May 2019 01:36:42 +0100 (0:00:00.112) 0:01:59.949 ********** TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Thursday 30 May 2019 01:36:42 +0100 (0:00:00.141) 0:02:00.091 ********** TASK [container-engine/docker : Configure docker repository on Fedora] ********* Thursday 30 May 2019 01:36:43 +0100 (0:00:00.131) 0:02:00.222 ********** TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Thursday 30 May 2019 01:36:43 +0100 (0:00:00.126) 0:02:00.349 ********** TASK [container-engine/docker : Copy yum.conf for editing] ********************* Thursday 30 May 2019 01:36:43 +0100 (0:00:00.122) 0:02:00.472 ********** TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Thursday 30 May 2019 01:36:43 +0100 (0:00:00.118) 0:02:00.590 ********** TASK [container-engine/docker : ensure docker packages are installed] ********** Thursday 30 May 2019 01:36:43 +0100 (0:00:00.124) 0:02:00.715 ********** TASK [container-engine/docker : Ensure docker packages are installed] ********** Thursday 30 May 2019 01:36:43 +0100 (0:00:00.159) 0:02:00.874 ********** TASK [container-engine/docker : get available packages on Ubuntu] ************** Thursday 30 May 2019 01:36:43 +0100 (0:00:00.149) 0:02:01.023 ********** TASK [container-engine/docker : show available packages on ubuntu] ************* Thursday 30 May 2019 01:36:43 +0100 (0:00:00.123) 0:02:01.147 ********** TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Thursday 30 May 2019 01:36:44 +0100 (0:00:00.120) 0:02:01.268 ********** TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Thursday 30 May 2019 01:36:44 +0100 (0:00:00.125) 0:02:01.393 ********** ok: [kube2] ok: [kube1] ok: [kube3] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Thursday 30 May 2019 01:36:45 +0100 (0:00:00.881) 0:02:02.275 ********** ok: [kube1] ok: [kube3] ok: [kube2] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Thursday 30 May 2019 01:36:45 +0100 (0:00:00.491) 0:02:02.767 ********** TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Thursday 30 May 2019 01:36:45 +0100 (0:00:00.120) 0:02:02.887 ********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Thursday 30 May 2019 01:36:46 +0100 (0:00:00.435) 0:02:03.323 ********** TASK [container-engine/docker : get systemd version] *************************** Thursday 30 May 2019 01:36:46 +0100 (0:00:00.136) 0:02:03.459 ********** TASK [container-engine/docker : Write docker.service systemd file] ************* Thursday 30 May 2019 01:36:46 +0100 (0:00:00.138) 0:02:03.598 ********** TASK [container-engine/docker : Write docker options systemd drop-in] ********** Thursday 30 May 2019 01:36:46 +0100 (0:00:00.156) 0:02:03.755 ********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Thursday 30 May 2019 01:36:47 +0100 (0:00:00.979) 0:02:04.735 ********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Thursday 30 May 2019 01:36:48 +0100 (0:00:00.998) 0:02:05.733 ********** TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Thursday 30 May 2019 01:36:48 +0100 (0:00:00.143) 0:02:05.877 ********** RUNNING HANDLER [container-engine/docker : restart docker] ********************* Thursday 30 May 2019 01:36:48 +0100 (0:00:00.107) 0:02:05.984 ********** changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Thursday 30 May 2019 01:36:49 +0100 (0:00:00.450) 0:02:06.434 ********** changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload docker.socket] ****** Thursday 30 May 2019 01:36:49 +0100 (0:00:00.515) 0:02:06.950 ********** RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Thursday 30 May 2019 01:36:49 +0100 (0:00:00.119) 0:02:07.070 ********** changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Thursday 30 May 2019 01:36:52 +0100 (0:00:03.042) 0:02:10.113 ********** Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube1] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Thursday 30 May 2019 01:37:03 +0100 (0:00:10.098) 0:02:20.211 ********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : ensure docker service is started and enabled] *** Thursday 30 May 2019 01:37:03 +0100 (0:00:00.634) 0:02:20.845 ********** ok: [kube1] => (item=docker) ok: [kube3] => (item=docker) ok: [kube2] => (item=docker) TASK [download : include_tasks] ************************************************ Thursday 30 May 2019 01:37:04 +0100 (0:00:00.560) 0:02:21.406 ********** included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Thursday 30 May 2019 01:37:04 +0100 (0:00:00.210) 0:02:21.616 ********** ok: [kube1] ok: [kube2] ok: 
[kube3] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Thursday 30 May 2019 01:37:04 +0100 (0:00:00.522) 0:02:22.138 ********** changed: [kube1] changed: [kube2] changed: [kube3] TASK [download : container_download | create local directory for saved/loaded container images] *** Thursday 30 May 2019 01:37:05 +0100 (0:00:00.464) 0:02:22.603 ********** TASK [download : Download items] *********************************************** Thursday 30 May 2019 01:37:05 +0100 (0:00:00.065) 0:02:22.669 ********** fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} PLAY RECAP ********************************************************************* kube1 : ok=109 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0 kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0 kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0 Thursday 30 May 2019 01:37:06 +0100 (0:00:01.339) 0:02:24.008 ********** =============================================================================== Install packages ------------------------------------------------------- 25.72s Extend root VG --------------------------------------------------------- 16.48s Wait for host to be available ------------------------------------------ 16.33s container-engine/docker : Docker | pause while Docker restarts --------- 10.10s gather facts from all instances ----------------------------------------- 9.59s container-engine/docker : Docker | reload docker ------------------------ 3.04s Persist loaded modules -------------------------------------------------- 2.66s kubernetes/preinstall : Create kubernetes directories ------------------- 1.86s bootstrap-os : Gather nodes hostnames ----------------------------------- 1.48s bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 1.42s Load required kernel modules -------------------------------------------- 1.42s download : Download items ----------------------------------------------- 1.34s Extend the root LV and FS to occupy remaining space --------------------- 1.28s download : Download items ----------------------------------------------- 1.22s download : Sync container ----------------------------------------------- 1.21s Gathering Facts --------------------------------------------------------- 1.16s kubernetes/preinstall : Create cni directories -------------------------- 1.15s bootstrap-os : Disable fastestmirror plugin ----------------------------- 1.12s bootstrap-os : Create remote_tmp for it is used by another module ------- 1.11s container-engine/docker : Write docker dns systemd drop-in -------------- 1.00s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Thu May 30 01:16:43 2019 From: ci at centos.org (ci at centos.org) Date: Thu, 30 May 2019 01:16:43 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #208 In-Reply-To: <1528358074.1579.1559092480797.JavaMail.jenkins@jenkins.ci.centos.org> References: <1528358074.1579.1559092480797.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <315834582.1638.1559179003356.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 56.14 KB...] changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? lint ??? cleanup ??? destroy ??? dependency ??? syntax ??? create ??? prepare ??? converge ??? idempotence ??? side_effect ??? verify ??? cleanup ??? destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? create ??? prepare --> Scenario: 'default' --> Action: 'create' [DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg. PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? lint ??? cleanup ??? destroy ??? dependency ??? syntax ??? create ??? prepare ??? converge ??? idempotence ??? side_effect ??? verify ??? cleanup ??? destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. 
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... [701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 Build step 'Execute shell' marked build as failure Performing Post build task... Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Fri May 31 00:13:47 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 31 May 2019 00:13:47 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_gd2-nightly-rpms #380 In-Reply-To: <162451506.1623.1559175353050.JavaMail.jenkins@jenkins.ci.centos.org> References: <162451506.1623.1559175353050.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1316042004.1674.1559261627406.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 37.42 KB...] Total 100 MB/s | 143 MB 00:01 Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Importing GPG key 0x352C64E5: Userid : "Fedora EPEL (7) " Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5 Package : epel-release-7-11.noarch (@extras) From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : mpfr-3.1.1-4.el7.x86_64 1/49 Installing : apr-1.4.8-3.el7_4.1.x86_64 2/49 Installing : apr-util-1.5.2-6.el7.x86_64 3/49 Installing : libmpc-1.0.1-3.el7.x86_64 4/49 Installing : python-ipaddress-1.0.16-2.el7.noarch 5/49 Installing : python-six-1.9.0-2.el7.noarch 6/49 Installing : cpp-4.8.5-36.el7_6.2.x86_64 7/49 Installing : kernel-headers-3.10.0-957.12.2.el7.x86_64 8/49 Installing : glibc-headers-2.17-260.el7_6.5.x86_64 9/49 Installing : glibc-devel-2.17-260.el7_6.5.x86_64 10/49 Installing : gcc-4.8.5-36.el7_6.2.x86_64 11/49 Installing : elfutils-0.172-2.el7.x86_64 12/49 Installing : pakchois-0.4-10.el7.x86_64 13/49 Installing : unzip-6.0-19.el7.x86_64 14/49 Installing : dwz-0.11-3.el7.x86_64 15/49 Installing : bzip2-1.0.6-13.el7.x86_64 16/49 Installing : usermode-1.111-5.el7.x86_64 17/49 Installing : patch-2.7.1-10.el7_5.x86_64 18/49 Installing : python-backports-1.0-8.el7.x86_64 19/49 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 20/49 Installing : python-urllib3-1.10.2-5.el7.noarch 21/49 Installing : python-requests-2.6.0-1.el7_1.noarch 22/49 Installing : python-babel-0.9.6-8.el7.noarch 23/49 Installing : distribution-gpg-keys-1.30-1.el7.noarch 24/49 Installing : mock-core-configs-30.2-1.el7.noarch 25/49 Installing : libmodman-2.0.1-8.el7.x86_64 26/49 Installing : libproxy-0.4.11-11.el7.x86_64 27/49 Installing : python-markupsafe-0.11-10.el7.x86_64 28/49 Installing : python-jinja2-2.7.2-3.el7_6.noarch 29/49 Installing : python2-distro-1.2.0-3.el7.noarch 30/49 Installing : gdb-7.6.1-114.el7.x86_64 31/49 Installing : perl-Thread-Queue-3.02-2.el7.noarch 32/49 Installing : perl-srpm-macros-1-8.el7.noarch 33/49 Installing : pigz-2.3.4-1.el7.x86_64 34/49 Installing : python2-pyroute2-0.4.13-2.el7.noarch 35/49 Installing : golang-src-1.11.5-1.el7.noarch 36/49 Installing : nettle-2.7.1-8.el7.x86_64 37/49 Installing : zip-3.0-11.el7.x86_64 38/49 Installing : 
redhat-rpm-config-9.1.0-87.el7.centos.noarch 39/49 Installing : mercurial-2.6.2-8.el7_4.x86_64 40/49 Installing : trousers-0.3.14-2.el7.x86_64 41/49 Installing : gnutls-3.3.29-9.el7_6.x86_64 42/49 Installing : neon-0.30.0-3.el7.x86_64 43/49 Installing : subversion-libs-1.7.14-14.el7.x86_64 44/49 Installing : subversion-1.7.14-14.el7.x86_64 45/49 Installing : golang-1.11.5-1.el7.x86_64 46/49 Installing : golang-bin-1.11.5-1.el7.x86_64 47/49 Installing : rpm-build-4.11.3-35.el7.x86_64 48/49 Installing : mock-1.4.15-1.el7.noarch 49/49 Verifying : trousers-0.3.14-2.el7.x86_64 1/49 Verifying : subversion-libs-1.7.14-14.el7.x86_64 2/49 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 3/49 Verifying : rpm-build-4.11.3-35.el7.x86_64 4/49 Verifying : mercurial-2.6.2-8.el7_4.x86_64 5/49 Verifying : apr-1.4.8-3.el7_4.1.x86_64 6/49 Verifying : zip-3.0-11.el7.x86_64 7/49 Verifying : nettle-2.7.1-8.el7.x86_64 8/49 Verifying : gcc-4.8.5-36.el7_6.2.x86_64 9/49 Verifying : golang-src-1.11.5-1.el7.noarch 10/49 Verifying : python2-pyroute2-0.4.13-2.el7.noarch 11/49 Verifying : pigz-2.3.4-1.el7.x86_64 12/49 Verifying : perl-srpm-macros-1-8.el7.noarch 13/49 Verifying : golang-1.11.5-1.el7.x86_64 14/49 Verifying : perl-Thread-Queue-3.02-2.el7.noarch 15/49 Verifying : glibc-devel-2.17-260.el7_6.5.x86_64 16/49 Verifying : python-jinja2-2.7.2-3.el7_6.noarch 17/49 Verifying : golang-bin-1.11.5-1.el7.x86_64 18/49 Verifying : gdb-7.6.1-114.el7.x86_64 19/49 Verifying : redhat-rpm-config-9.1.0-87.el7.centos.noarch 20/49 Verifying : python-urllib3-1.10.2-5.el7.noarch 21/49 Verifying : gnutls-3.3.29-9.el7_6.x86_64 22/49 Verifying : python2-distro-1.2.0-3.el7.noarch 23/49 Verifying : python-markupsafe-0.11-10.el7.x86_64 24/49 Verifying : libmodman-2.0.1-8.el7.x86_64 25/49 Verifying : mpfr-3.1.1-4.el7.x86_64 26/49 Verifying : distribution-gpg-keys-1.30-1.el7.noarch 27/49 Verifying : python-babel-0.9.6-8.el7.noarch 28/49 Verifying : mock-1.4.15-1.el7.noarch 29/49 Verifying : apr-util-1.5.2-6.el7.x86_64 30/49 Verifying : python-backports-1.0-8.el7.x86_64 31/49 Verifying : patch-2.7.1-10.el7_5.x86_64 32/49 Verifying : libmpc-1.0.1-3.el7.x86_64 33/49 Verifying : usermode-1.111-5.el7.x86_64 34/49 Verifying : python-six-1.9.0-2.el7.noarch 35/49 Verifying : libproxy-0.4.11-11.el7.x86_64 36/49 Verifying : neon-0.30.0-3.el7.x86_64 37/49 Verifying : mock-core-configs-30.2-1.el7.noarch 38/49 Verifying : python-requests-2.6.0-1.el7_1.noarch 39/49 Verifying : bzip2-1.0.6-13.el7.x86_64 40/49 Verifying : subversion-1.7.14-14.el7.x86_64 41/49 Verifying : python-ipaddress-1.0.16-2.el7.noarch 42/49 Verifying : glibc-headers-2.17-260.el7_6.5.x86_64 43/49 Verifying : dwz-0.11-3.el7.x86_64 44/49 Verifying : unzip-6.0-19.el7.x86_64 45/49 Verifying : cpp-4.8.5-36.el7_6.2.x86_64 46/49 Verifying : pakchois-0.4-10.el7.x86_64 47/49 Verifying : elfutils-0.172-2.el7.x86_64 48/49 Verifying : kernel-headers-3.10.0-957.12.2.el7.x86_64 49/49 Installed: golang.x86_64 0:1.11.5-1.el7 mock.noarch 0:1.4.15-1.el7 rpm-build.x86_64 0:4.11.3-35.el7 Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bzip2.x86_64 0:1.0.6-13.el7 cpp.x86_64 0:4.8.5-36.el7_6.2 distribution-gpg-keys.noarch 0:1.30-1.el7 dwz.x86_64 0:0.11-3.el7 elfutils.x86_64 0:0.172-2.el7 gcc.x86_64 0:4.8.5-36.el7_6.2 gdb.x86_64 0:7.6.1-114.el7 glibc-devel.x86_64 0:2.17-260.el7_6.5 glibc-headers.x86_64 0:2.17-260.el7_6.5 gnutls.x86_64 0:3.3.29-9.el7_6 golang-bin.x86_64 0:1.11.5-1.el7 golang-src.noarch 0:1.11.5-1.el7 kernel-headers.x86_64 
0:3.10.0-957.12.2.el7 libmodman.x86_64 0:2.0.1-8.el7 libmpc.x86_64 0:1.0.1-3.el7 libproxy.x86_64 0:0.4.11-11.el7 mercurial.x86_64 0:2.6.2-8.el7_4 mock-core-configs.noarch 0:30.2-1.el7 mpfr.x86_64 0:3.1.1-4.el7 neon.x86_64 0:0.30.0-3.el7 nettle.x86_64 0:2.7.1-8.el7 pakchois.x86_64 0:0.4-10.el7 patch.x86_64 0:2.7.1-10.el7_5 perl-Thread-Queue.noarch 0:3.02-2.el7 perl-srpm-macros.noarch 0:1-8.el7 pigz.x86_64 0:2.3.4-1.el7 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-3.el7_6 python-markupsafe.x86_64 0:0.11-10.el7 python-requests.noarch 0:2.6.0-1.el7_1 python-six.noarch 0:1.9.0-2.el7 python-urllib3.noarch 0:1.10.2-5.el7 python2-distro.noarch 0:1.2.0-3.el7 python2-pyroute2.noarch 0:0.4.13-2.el7 redhat-rpm-config.noarch 0:9.1.0-87.el7.centos subversion.x86_64 0:1.7.14-14.el7 subversion-libs.x86_64 0:1.7.14-14.el7 trousers.x86_64 0:0.3.14-2.el7 unzip.x86_64 0:6.0-19.el7 usermode.x86_64 0:1.111-5.el7 zip.x86_64 0:3.0-11.el7 Complete! LINUX Installing dep. Version: v0.5.0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 605 0 605 0 0 2592 0 --:--:-- --:--:-- --:--:-- 2596 100 8513k 100 8513k 0 0 14.3M 0 --:--:-- --:--:-- --:--:-- 14.3M Installing gometalinter. Version: 2.0.5 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 627 0 627 0 0 3609 0 --:--:-- --:--:-- --:--:-- 3624 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 83 38.3M 83 32.1M 0 0 26.8M 0 0:00:01 0:00:01 --:--:-- 32.2M100 38.3M 100 38.3M 0 0 28.2M 0 0:00:01 0:00:01 --:--:-- 33.2M Installing etcd. Version: v3.3.9 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0100 153 0 153 0 0 889 0 --:--:-- --:--:-- --:--:-- 894 0 0 0 620 0 0 2234 0 --:--:-- --:--:-- --:--:-- 2234 100 10.7M 100 10.7M 0 0 17.3M 0 --:--:-- --:--:-- --:--:-- 17.3M ~/nightlyrpmtvtfMS/go/src/github.com/gluster/glusterd2 ~ Installing vendored packages Creating dist archive /root/nightlyrpmtvtfMS/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz Created dist archive /root/nightlyrpmtvtfMS/glusterd2-v6.0-dev.152.git54ce5f6-vendor.tar.xz ~ ~/nightlyrpmtvtfMS ~ INFO: mock.py version 1.4.15 starting (python version = 2.7.5)... 
Start: init plugins INFO: selinux disabled Finish: init plugins Start: run INFO: Start(/root/nightlyrpmtvtfMS/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) Start: clean chroot Finish: clean chroot Start: chroot init INFO: calling preinit hooks INFO: enabled root cache INFO: enabled yum cache Start: cleaning yum metadata Finish: cleaning yum metadata INFO: enabled HW Info plugin Mock Version: 1.4.15 INFO: Mock Version: 1.4.15 Start: yum install Finish: yum install Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: Start: build phase for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Finish: build setup for glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: rpmbuild glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm Start: Outputting list of installed packages Finish: Outputting list of installed packages ERROR: Exception(/root/nightlyrpmtvtfMS/rpmbuild/SRPMS/glusterd2-5.0-0.dev.152.git54ce5f6.el7.src.rpm) Config(epel-7-x86_64) 1 minutes 32 seconds INFO: Results and/or logs in: /srv/glusterd2/nightly/master/7/x86_64 INFO: Cleaning up build root ('cleanup_on_failure=True') Start: clean chroot Finish: clean chroot ERROR: Command failed: # /usr/bin/systemd-nspawn -q -M f60f6a62b2784b6ca4a954137badaa9f -D /var/lib/mock/epel-7-x86_64/root --capability=cap_ipc_lock --bind=/tmp/mock-resolv.xF7EJd:/etc/resolv.conf --setenv=LANG=en_US.UTF-8 --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOSTNAME=mock --setenv=PROMPT_COMMAND=printf "\033]0;\007" --setenv=HOME=/builddir --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PS1= \s-\v\$ -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/glusterd2.spec Build step 'Execute shell' marked build as failure Performing Post build task... 
Match found for :Building remotely : True Logical operation result is TRUE Running script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done [gluster_gd2-nightly-rpms] $ /bin/sh -xe /tmp/jenkins6377845453859530091.sh + SSID_FILE= ++ cat + for ssid in '$(cat ${SSID_FILE})' + cico -q node done c8c7bdd9 +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | host_id | hostname | ip_address | chassis | used_count | current_state | comment | distro | rel | centos_version | architecture | node_pool | console_port | flavor | +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ | 213 | n22.dusty | 172.19.2.86 | dusty | 3613 | Deployed | c8c7bdd9 | None | None | 7 | x86_64 | 1 | 2210 | None | +---------+-----------+-------------+---------+------------+---------------+----------+--------+------+----------------+--------------+-----------+--------------+--------+ POST BUILD TASK : SUCCESS END OF POST BUILD TASK : 0 From ci at centos.org Fri May 31 00:37:04 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 31 May 2019 00:37:04 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #184 In-Reply-To: <1554586424.1632.1559176627108.JavaMail.jenkins@jenkins.ci.centos.org> References: <1554586424.1632.1559176627108.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <1205065500.1677.1559263024971.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 287.10 KB...] TASK [container-engine/docker : check number of search domains] **************** Friday 31 May 2019 01:36:38 +0100 (0:00:00.128) 0:01:57.797 ************ TASK [container-engine/docker : check length of search domains] **************** Friday 31 May 2019 01:36:38 +0100 (0:00:00.127) 0:01:57.924 ************ TASK [container-engine/docker : check for minimum kernel version] ************** Friday 31 May 2019 01:36:38 +0100 (0:00:00.125) 0:01:58.049 ************ TASK [container-engine/docker : Ensure old versions of Docker are not installed. | Debian] *** Friday 31 May 2019 01:36:39 +0100 (0:00:00.128) 0:01:58.178 ************ TASK [container-engine/docker : Ensure old versions of Docker are not installed. 
| RedHat] *** Friday 31 May 2019 01:36:39 +0100 (0:00:00.245) 0:01:58.423 ************ TASK [container-engine/docker : ensure docker-ce repository public key is installed] *** Friday 31 May 2019 01:36:39 +0100 (0:00:00.621) 0:01:59.044 ************ TASK [container-engine/docker : ensure docker-ce repository is enabled] ******** Friday 31 May 2019 01:36:40 +0100 (0:00:00.110) 0:01:59.155 ************ TASK [container-engine/docker : ensure docker-engine repository public key is installed] *** Friday 31 May 2019 01:36:40 +0100 (0:00:00.112) 0:01:59.267 ************ TASK [container-engine/docker : ensure docker-engine repository is enabled] **** Friday 31 May 2019 01:36:40 +0100 (0:00:00.146) 0:01:59.414 ************ TASK [container-engine/docker : Configure docker repository on Fedora] ********* Friday 31 May 2019 01:36:40 +0100 (0:00:00.141) 0:01:59.556 ************ TASK [container-engine/docker : Configure docker repository on RedHat/CentOS] *** Friday 31 May 2019 01:36:40 +0100 (0:00:00.124) 0:01:59.681 ************ TASK [container-engine/docker : Copy yum.conf for editing] ********************* Friday 31 May 2019 01:36:40 +0100 (0:00:00.123) 0:01:59.804 ************ TASK [container-engine/docker : Edit copy of yum.conf to set obsoletes=0] ****** Friday 31 May 2019 01:36:40 +0100 (0:00:00.124) 0:01:59.929 ************ TASK [container-engine/docker : ensure docker packages are installed] ********** Friday 31 May 2019 01:36:40 +0100 (0:00:00.125) 0:02:00.054 ************ TASK [container-engine/docker : Ensure docker packages are installed] ********** Friday 31 May 2019 01:36:41 +0100 (0:00:00.159) 0:02:00.214 ************ TASK [container-engine/docker : get available packages on Ubuntu] ************** Friday 31 May 2019 01:36:41 +0100 (0:00:00.148) 0:02:00.362 ************ TASK [container-engine/docker : show available packages on ubuntu] ************* Friday 31 May 2019 01:36:41 +0100 (0:00:00.124) 0:02:00.487 ************ TASK [container-engine/docker : Set docker pin priority to apt_preferences on Debian family] *** Friday 31 May 2019 01:36:41 +0100 (0:00:00.126) 0:02:00.613 ************ TASK [container-engine/docker : ensure service is started if docker packages are already present] *** Friday 31 May 2019 01:36:41 +0100 (0:00:00.122) 0:02:00.736 ************ ok: [kube1] ok: [kube2] ok: [kube3] [WARNING]: flush_handlers task does not support when conditional TASK [container-engine/docker : set fact for docker_version] ******************* Friday 31 May 2019 01:36:42 +0100 (0:00:00.993) 0:02:01.729 ************ ok: [kube2] ok: [kube1] ok: [kube3] TASK [container-engine/docker : check minimum docker version for docker_dns mode. 
You need at least docker version >= 1.12 for resolvconf_mode=docker_dns] *** Friday 31 May 2019 01:36:43 +0100 (0:00:00.540) 0:02:02.270 ************ TASK [container-engine/docker : Create docker service systemd directory if it doesn't exist] *** Friday 31 May 2019 01:36:43 +0100 (0:00:00.124) 0:02:02.395 ************ changed: [kube3] changed: [kube1] changed: [kube2] TASK [container-engine/docker : Write docker proxy drop-in] ******************** Friday 31 May 2019 01:36:43 +0100 (0:00:00.517) 0:02:02.912 ************ TASK [container-engine/docker : get systemd version] *************************** Friday 31 May 2019 01:36:43 +0100 (0:00:00.140) 0:02:03.053 ************ TASK [container-engine/docker : Write docker.service systemd file] ************* Friday 31 May 2019 01:36:44 +0100 (0:00:00.147) 0:02:03.200 ************ TASK [container-engine/docker : Write docker options systemd drop-in] ********** Friday 31 May 2019 01:36:44 +0100 (0:00:00.152) 0:02:03.353 ************ changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Write docker dns systemd drop-in] ************** Friday 31 May 2019 01:36:45 +0100 (0:00:01.096) 0:02:04.450 ************ changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : Copy docker orphan clean up script to the node] *** Friday 31 May 2019 01:36:46 +0100 (0:00:00.999) 0:02:05.449 ************ TASK [container-engine/docker : Write docker orphan clean up systemd drop-in] *** Friday 31 May 2019 01:36:46 +0100 (0:00:00.149) 0:02:05.599 ************ RUNNING HANDLER [container-engine/docker : restart docker] ********************* Friday 31 May 2019 01:36:46 +0100 (0:00:00.122) 0:02:05.722 ************ changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload systemd] ************ Friday 31 May 2019 01:36:47 +0100 (0:00:00.439) 0:02:06.162 ************ changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | reload docker.socket] ****** Friday 31 May 2019 01:36:47 +0100 (0:00:00.517) 0:02:06.680 ************ RUNNING HANDLER [container-engine/docker : Docker | reload docker] ************* Friday 31 May 2019 01:36:47 +0100 (0:00:00.120) 0:02:06.800 ************ changed: [kube1] changed: [kube2] changed: [kube3] RUNNING HANDLER [container-engine/docker : Docker | pause while Docker restarts] *** Friday 31 May 2019 01:36:50 +0100 (0:00:03.050) 0:02:09.851 ************ Pausing for 10 seconds (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) [container-engine/docker : Docker | pause while Docker restarts] Waiting for docker restart: ok: [kube1] RUNNING HANDLER [container-engine/docker : Docker | wait for docker] *********** Friday 31 May 2019 01:37:00 +0100 (0:00:10.077) 0:02:19.928 ************ changed: [kube1] changed: [kube2] changed: [kube3] TASK [container-engine/docker : ensure docker service is started and enabled] *** Friday 31 May 2019 01:37:01 +0100 (0:00:00.556) 0:02:20.485 ************ ok: [kube1] => (item=docker) ok: [kube2] => (item=docker) ok: [kube3] => (item=docker) TASK [download : include_tasks] ************************************************ Friday 31 May 2019 01:37:02 +0100 (0:00:00.600) 0:02:21.085 ************ included: /root/gcs/deploy/kubespray/roles/download/tasks/download_prep.yml for kube1, kube2, kube3 TASK [download : Register docker images info] ********************************** Friday 31 May 2019 01:37:02 +0100 (0:00:00.217) 0:02:21.302 ************ ok: [kube1] ok: [kube3] ok: 
[kube2] TASK [download : container_download | Create dest directory for saved/loaded container images] *** Friday 31 May 2019 01:37:02 +0100 (0:00:00.553) 0:02:21.856 ************ changed: [kube3] changed: [kube1] changed: [kube2] TASK [download : container_download | create local directory for saved/loaded container images] *** Friday 31 May 2019 01:37:03 +0100 (0:00:00.513) 0:02:22.369 ************ TASK [download : Download items] *********************************************** Friday 31 May 2019 01:37:03 +0100 (0:00:00.060) 0:02:22.429 ************ fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} included: /root/gcs/deploy/kubespray/roles/download/tasks/download_file.yml for kube1, kube2, kube3 fatal: [kube1]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube2]: FAILED! 
=> {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} fatal: [kube3]: FAILED! => {"reason": "'delegate_to' is not a valid attribute for a TaskInclude\n\nThe error appears to be in '/root/gcs/deploy/kubespray/roles/download/tasks/download_container.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: container_download | Make download decision if pull is required by tag or sha256\n ^ here\n"} PLAY RECAP ********************************************************************* kube1 : ok=109 changed=22 unreachable=0 failed=10 skipped=116 rescued=0 ignored=0 kube2 : ok=96 changed=22 unreachable=0 failed=10 skipped=111 rescued=0 ignored=0 kube3 : ok=94 changed=22 unreachable=0 failed=10 skipped=113 rescued=0 ignored=0 Friday 31 May 2019 01:37:04 +0100 (0:00:01.325) 0:02:23.755 ************ =============================================================================== Install packages ------------------------------------------------------- 25.67s Wait for host to be available ------------------------------------------ 16.28s Extend root VG --------------------------------------------------------- 14.33s container-engine/docker : Docker | pause while Docker restarts --------- 10.08s gather facts from all instances ----------------------------------------- 9.13s Persist loaded modules -------------------------------------------------- 3.35s container-engine/docker : Docker | reload docker ------------------------ 3.05s kubernetes/preinstall : Create kubernetes directories ------------------- 2.00s Load required kernel modules -------------------------------------------- 1.65s kubernetes/preinstall : Enable ip forwarding ---------------------------- 1.61s bootstrap-os : Gather nodes hostnames ----------------------------------- 1.55s Extend the root LV and FS to occupy remaining space --------------------- 1.40s bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 1.36s download : Download items ----------------------------------------------- 1.33s bootstrap-os : Disable fastestmirror plugin ----------------------------- 1.24s download : Sync container ----------------------------------------------- 1.19s download : Download items ----------------------------------------------- 1.15s kubernetes/preinstall : Create cni directories -------------------------- 1.12s Gathering Facts --------------------------------------------------------- 1.12s kubernetes/preinstall : Remove swapfile from /etc/fstab ----------------- 1.11s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... 
Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Fri May 31 01:16:44 2019 From: ci at centos.org (ci at centos.org) Date: Fri, 31 May 2019 01:16:44 +0000 (UTC) Subject: [CI-results] Build failed in Jenkins: gluster_ansible-infra #209 In-Reply-To: <315834582.1638.1559179003356.JavaMail.jenkins@jenkins.ci.centos.org> References: <315834582.1638.1559179003356.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <450516586.1678.1559265404127.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 56.13 KB...] changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=4 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? lint ??? cleanup ??? destroy ??? dependency ??? syntax ??? create ??? prepare ??? converge ??? idempotence ??? side_effect ??? verify ??? cleanup ??? destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. --> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... 
[701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 ./gluster-ansible-infra/tests/run-centos-ci.sh: line 29: cd: gluster-ansible-infra/roles/backend_setup/: No such file or directory --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? create ??? prepare --> Scenario: 'default' --> Action: 'create' [DEPRECATION WARNING]: docker_image_facts is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg. PLAY [Create] ****************************************************************** TASK [Log into a Docker registry] ********************************************** skipping: [localhost] => (item=None) TASK [Create Dockerfiles from image names] ************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Discover local Docker images] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Build an Ansible compatible image] *************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create docker network(s)] ************************************************ TASK [Determine the CMD directives] ******************************************** ok: [localhost] => (item=None) ok: [localhost] TASK [Create molecule instance(s)] ********************************************* changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) creation to complete] ******************************* FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left). changed: [localhost] => (item=None) changed: [localhost] PLAY RECAP ********************************************************************* localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 --> Scenario: 'default' --> Action: 'prepare' PLAY [Prepare] ***************************************************************** TASK [Gathering Facts] ********************************************************* ok: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] TASK [Install Dependency Packages] ********************************************* changed: [instance] PLAY RECAP ********************************************************************* instance : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 --> Validating schema /root/gluster-ansible-infra/roles/firewall_config/molecule/default/molecule.yml. Validation completed successfully. --> Test matrix ??? default ??? lint ??? cleanup ??? destroy ??? dependency ??? syntax ??? create ??? prepare ??? converge ??? idempotence ??? side_effect ??? verify ??? cleanup ??? destroy --> Scenario: 'default' --> Action: 'lint' --> Executing Yamllint on files found in /root/gluster-ansible-infra/roles/firewall_config/... Lint completed successfully. --> Executing Flake8 on files found in /root/gluster-ansible-infra/roles/firewall_config/molecule/default/tests/... Lint completed successfully. 
--> Executing Ansible Lint on /root/gluster-ansible-infra/roles/firewall_config/molecule/default/playbook.yml... [701] Role info should contain platforms /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: author /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: description /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: company /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} [703] Should change default metadata: license /root/gluster-ansible-infra/roles/firewall_config/meta/main.yml:1 {'meta/main.yml': {'__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'dependencies': [], u'galaxy_info': {u'description': u'your description', u'license': u'license (GPLv2, CC-BY, etc)', u'author': u'your name', u'company': u'your company (optional)', u'galaxy_tags': [], '__line__': 2, '__file__': u'/root/gluster-ansible-infra/roles/firewall_config/meta/main.yml', u'min_ansible_version': 1.2}, '__line__': 1}} An error occurred during the test sequence action: 'lint'. Cleaning up. --> Scenario: 'default' --> Action: 'destroy' PLAY [Destroy] ***************************************************************** TASK [Destroy molecule instance(s)] ******************************************** changed: [localhost] => (item=None) changed: [localhost] TASK [Wait for instance(s) deletion to complete] ******************************* FAILED - RETRYING: Wait for instance(s) deletion to complete (300 retries left). 
changed: [localhost] => (item=None) changed: [localhost] TASK [Delete docker network(s)] ************************************************ PLAY RECAP ********************************************************************* localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 Build step 'Execute shell' marked build as failure Performing Post build task... Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0 From ci at centos.org Thu May 2 00:39:02 2019 From: ci at centos.org (ci at centos.org) Date: Thu, 02 May 2019 00:39:02 -0000 Subject: [CI-results] Build failed in Jenkins: gluster_anteater_gcs #155 In-Reply-To: <590764804.2902.1556672743672.JavaMail.jenkins@jenkins.ci.centos.org> References: <590764804.2902.1556672743672.JavaMail.jenkins@jenkins.ci.centos.org> Message-ID: <2119563175.2957.1556757541049.JavaMail.jenkins@jenkins.ci.centos.org> See ------------------------------------------ [...truncated 242.52 KB...] ==> kube1: 30600 (guest) => 9090 (host) (adapter eth0) ==> kube1: 30800 (guest) => 9000 (host) (adapter eth0) ==> kube1: Configuring and enabling network interfaces... kube1: SSH address: 192.168.121.207:22 kube1: SSH username: vagrant kube1: SSH auth method: private key PLAY [Pre-deploy bootstrapping] ************************************************ TASK [Extend root VG] ********************************************************** Thursday 02 May 2019 01:37:30 +0100 (0:00:00.247) 0:00:00.247 ********** fatal: [kube1]: UNREACHABLE! => {"changed": false, "msg": "SSH Error: data could not be sent to remote host \"192.168.121.207\". 
Make sure this host can be reached over ssh", "unreachable": true} changed: [kube2] changed: [kube3] TASK [Extend the root LV and FS to occupy remaining space] ********************* Thursday 02 May 2019 01:37:32 +0100 (0:00:01.905) 0:00:02.153 ********** changed: [kube2] changed: [kube3] TASK [Load required kernel modules] ******************************************** Thursday 02 May 2019 01:37:33 +0100 (0:00:01.780) 0:00:03.933 ********** ok: [kube2] => (item=dm_mirror) ok: [kube3] => (item=dm_mirror) changed: [kube2] => (item=dm_snapshot) changed: [kube3] => (item=dm_snapshot) changed: [kube2] => (item=dm_thin_pool) changed: [kube3] => (item=dm_thin_pool) TASK [Persist loaded modules] ************************************************** Thursday 02 May 2019 01:37:36 +0100 (0:00:02.438) 0:00:06.371 ********** changed: [kube3] => (item=dm_mirror) changed: [kube2] => (item=dm_mirror) changed: [kube2] => (item=dm_snapshot) changed: [kube3] => (item=dm_snapshot) changed: [kube3] => (item=dm_thin_pool) changed: [kube2] => (item=dm_thin_pool) TASK [Install packages] ******************************************************** Thursday 02 May 2019 01:37:41 +0100 (0:00:05.366) 0:00:11.738 ********** changed: [kube3] => (item=socat) changed: [kube2] => (item=socat) TASK [Reboot to make layered packages available] ******************************* Thursday 02 May 2019 01:38:14 +0100 (0:00:32.943) 0:00:44.682 ********** changed: [kube3] changed: [kube2] TASK [Wait for host to be available] ******************************************* Thursday 02 May 2019 01:38:16 +0100 (0:00:02.125) 0:00:46.807 ********** ok: [kube3] ok: [kube2] PLAY [localhost] *************************************************************** skipping: no hosts matched [WARNING]: Could not match supplied host pattern, ignoring: bastion PLAY [bastion[0]] ************************************************************** skipping: no hosts matched [WARNING]: Could not match supplied host pattern, ignoring: calico-rr PLAY [k8s-cluster:etcd:calico-rr] ********************************************** TASK [download : include_tasks] ************************************************ Thursday 02 May 2019 01:38:36 +0100 (0:00:19.389) 0:01:06.197 ********** TASK [download : Download items] *********************************************** Thursday 02 May 2019 01:38:36 +0100 (0:00:00.089) 0:01:06.287 ********** TASK [download : Sync container] *********************************************** Thursday 02 May 2019 01:38:36 +0100 (0:00:00.325) 0:01:06.613 ********** TASK [download : include_tasks] ************************************************ Thursday 02 May 2019 01:38:36 +0100 (0:00:00.314) 0:01:06.928 ********** TASK [kubespray-defaults : Configure defaults] ********************************* Thursday 02 May 2019 01:38:36 +0100 (0:00:00.099) 0:01:07.028 ********** ok: [kube2] => { "msg": "Check roles/kubespray-defaults/defaults/main.yml" } ok: [kube3] => { "msg": "Check roles/kubespray-defaults/defaults/main.yml" } TASK [bootstrap-os : Fetch /etc/os-release] ************************************ Thursday 02 May 2019 01:38:37 +0100 (0:00:00.365) 0:01:07.393 ********** ok: [kube2] ok: [kube3] TASK [bootstrap-os : include_tasks] ******************************************** Thursday 02 May 2019 01:38:37 +0100 (0:00:00.445) 0:01:07.839 ********** TASK [bootstrap-os : include_tasks] ******************************************** Thursday 02 May 2019 01:38:37 +0100 (0:00:00.072) 0:01:07.912 ********** TASK [bootstrap-os : include_tasks] 
******************************************** Thursday 02 May 2019 01:38:37 +0100 (0:00:00.083) 0:01:07.995 ********** TASK [bootstrap-os : include_tasks] ******************************************** Thursday 02 May 2019 01:38:37 +0100 (0:00:00.069) 0:01:08.064 ********** TASK [bootstrap-os : include_tasks] ******************************************** Thursday 02 May 2019 01:38:37 +0100 (0:00:00.072) 0:01:08.137 ********** included: /root/gcs/deploy/kubespray/roles/bootstrap-os/tasks/bootstrap-centos.yml for kube2, kube3 TASK [bootstrap-os : check if atomic host] ************************************* Thursday 02 May 2019 01:38:38 +0100 (0:00:00.142) 0:01:08.279 ********** ok: [kube2] ok: [kube3] TASK [bootstrap-os : set_fact] ************************************************* Thursday 02 May 2019 01:38:39 +0100 (0:00:01.205) 0:01:09.485 ********** ok: [kube2] ok: [kube3] TASK [bootstrap-os : Check presence of fastestmirror.conf] ********************* Thursday 02 May 2019 01:38:39 +0100 (0:00:00.332) 0:01:09.818 ********** ok: [kube3] ok: [kube2] TASK [bootstrap-os : Disable fastestmirror plugin] ***************************** Thursday 02 May 2019 01:38:40 +0100 (0:00:01.276) 0:01:11.094 ********** changed: [kube2] changed: [kube3] TASK [bootstrap-os : Add proxy to /etc/yum.conf if http_proxy is defined] ****** Thursday 02 May 2019 01:38:42 +0100 (0:00:01.904) 0:01:12.999 ********** TASK [bootstrap-os : Install libselinux-python and yum-utils for bootstrap] **** Thursday 02 May 2019 01:38:42 +0100 (0:00:00.074) 0:01:13.074 ********** TASK [bootstrap-os : Check python-pip package] ********************************* Thursday 02 May 2019 01:38:43 +0100 (0:00:00.088) 0:01:13.162 ********** TASK [bootstrap-os : Install epel-release for bootstrap] *********************** Thursday 02 May 2019 01:38:43 +0100 (0:00:00.092) 0:01:13.254 ********** TASK [bootstrap-os : Install pip for bootstrap] ******************************** Thursday 02 May 2019 01:38:43 +0100 (0:00:00.081) 0:01:13.336 ********** TASK [bootstrap-os : include_tasks] ******************************************** Thursday 02 May 2019 01:38:43 +0100 (0:00:00.084) 0:01:13.420 ********** TASK [bootstrap-os : include_tasks] ******************************************** Thursday 02 May 2019 01:38:43 +0100 (0:00:00.099) 0:01:13.519 ********** TASK [bootstrap-os : Remove require tty] *************************************** Thursday 02 May 2019 01:38:43 +0100 (0:00:00.092) 0:01:13.611 ********** ok: [kube2] ok: [kube3] TASK [bootstrap-os : Create remote_tmp for it is used by another module] ******* Thursday 02 May 2019 01:38:44 +0100 (0:00:01.421) 0:01:15.033 ********** changed: [kube2] changed: [kube3] TASK [bootstrap-os : Gather nodes hostnames] *********************************** Thursday 02 May 2019 01:38:46 +0100 (0:00:01.872) 0:01:16.905 ********** ok: [kube2] ok: [kube3] TASK [bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed)] *** Thursday 02 May 2019 01:38:49 +0100 (0:00:02.357) 0:01:19.263 ********** ok: [kube2] ok: [kube3] TASK [bootstrap-os : Assign inventory name to unconfigured hostnames (CoreOS and Tumbleweed only)] *** Thursday 02 May 2019 01:38:51 +0100 (0:00:02.491) 0:01:21.754 ********** TASK [bootstrap-os : Update hostname fact (CoreOS and Tumbleweed only)] ******** Thursday 02 May 2019 01:38:51 +0100 (0:00:00.089) 0:01:21.844 ********** PLAY [k8s-cluster:etcd:calico-rr] ********************************************** TASK [Gathering Facts] 
********************************************************* Thursday 02 May 2019 01:38:51 +0100 (0:00:00.095) 0:01:21.939 ********** ok: [kube3] ok: [kube2] TASK [gather facts from all instances] ***************************************** Thursday 02 May 2019 01:38:53 +0100 (0:00:01.880) 0:01:23.819 ********** failed: [kube2] (item=kube1) => {"item": "kube1", "msg": "SSH Error: data could not be sent to remote host \"192.168.121.207\". Make sure this host can be reached over ssh", "unreachable": true} failed: [kube3] (item=kube1) => {"item": "kube1", "msg": "SSH Error: data could not be sent to remote host \"192.168.121.207\". Make sure this host can be reached over ssh", "unreachable": true} ok: [kube2 -> 192.168.121.108] => (item=kube2) ok: [kube3 -> 192.168.121.108] => (item=kube2) ok: [kube2 -> 192.168.121.206] => (item=kube3) ok: [kube3 -> 192.168.121.206] => (item=kube3) failed: [kube2] (item=kube1) => {"item": "kube1", "msg": "SSH Error: data could not be sent to remote host \"192.168.121.207\". Make sure this host can be reached over ssh", "unreachable": true} failed: [kube3] (item=kube1) => {"item": "kube1", "msg": "SSH Error: data could not be sent to remote host \"192.168.121.207\". Make sure this host can be reached over ssh", "unreachable": true} ok: [kube3 -> 192.168.121.108] => (item=kube2) ok: [kube2 -> 192.168.121.108] => (item=kube2) ok: [kube2 -> 192.168.121.206] => (item=kube3) fatal: [kube2]: UNREACHABLE! => {"changed": false, "msg": "All items completed", "results": [{"_ansible_ignore_errors": null, "_ansible_item_label": "kube1", "_ansible_item_result": true, "item": "kube1", "msg": "SSH Error: data could not be sent to remote host \"192.168.121.207\". Make sure this host can be reached over ssh", "unreachable": true}, {"_ansible_delegated_vars": {"ansible_delegated_host": "kube2", "ansible_host": "192.168.121.108"}, "_ansible_ignore_errors": null, "_ansible_item_label": "kube2", "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "_ansible_verbose_override": true, "ansible_facts": {"ansible_all_ipv4_addresses": ["192.168.121.108", "172.17.0.1"], "ansible_all_ipv6_addresses": ["fe80::5054:ff:fe62:b7f2"], "ansible_apparmor": {"status": "disabled"}, "ansible_architecture": "x86_64", "ansible_bios_date": "01/01/2011", "ansible_bios_version": "0.5.1", "ansible_cmdline": {"BOOT_IMAGE": "/ostree/centos-atomic-host-ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/vmlinuz-3.10.0-957.1.3.el7.x86_64", "console": "ttyS0,115200n8", "crashkernel": "auto", "no_timer_check": true, "ostree": "/ostree/boot.1/centos-atomic-host/ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/0", "rd.lvm.lv": "atomicos/root", "root": "/dev/mapper/atomicos-root"}, "ansible_date_time": {"date": "2019-05-02", "day": "02", "epoch": "1556757535", "hour": "00", "iso8601": "2019-05-02T00:38:55Z", "iso8601_basic": "20190502T003855006310", "iso8601_basic_short": "20190502T003855", "iso8601_micro": "2019-05-02T00:38:55.006558Z", "minute": "38", "month": "05", "second": "55", "time": "00:38:55", "tz": "UTC", "tz_offset": "+0000", "weekday": "Thursday", "weekday_number": "4", "weeknumber": "17", "year": "2019"}, "ansible_default_ipv4": {"address": "192.168.121.108", "alias": "eth0", "broadcast": "192.168.121.255", "gateway": "192.168.121.1", "interface": "eth0", "macaddress": "52:54:00:62:b7:f2", "mtu": 1500, "netmask": "255.255.255.0", "network": "192.168.121.0", "type": "ether"}, "ansible_default_ipv6": {}, "ansible_device_links": {"ids": {"dm-0": 
["dm-name-atomicos-root", "dm-uuid-LVM-66eMHV6iDKuNpuUkbfaS59FexWnof79axp8dVuJg8PQpi4u2jEXiS2onl9tiFl4L"], "vda2": ["lvm-pv-uuid-Igwjxr-wF4v-rdeh-DZGI-IDKO-Jk6L-s4N1kl"], "vdb": ["lvm-pv-uuid-y6m62w-WUbc-18QU-epOU-2Ksb-YU2v-ap0Ggu"]}, "labels": {}, "masters": {"vda2": ["dm-0"], "vdb": ["dm-0"]}, "uuids": {"dm-0": ["3aa470ac-9a1b-44c5-871a-d72d513af0e2"], "vda1": ["5804e2ea-a86e-45e1-92c8-f225b7fb60e0"]}}, "ansible_devices": {"dm-0": {"holders": [], "host": "", "links": {"ids": ["dm-name-atomicos-root", "dm-uuid-LVM-66eMHV6iDKuNpuUkbfaS59FexWnof79axp8dVuJg8PQpi4u2jEXiS2onl9tiFl4L"], "labels": [], "masters": [], "uuids": ["3aa470ac-9a1b-44c5-871a-d72d513af0e2"]}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "", "sectors": "56139776", "sectorsize": "512", "size": "26.77 GB", "support_discard": "0", "vendor": null, "virtual": 1}, "vda": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {"vda1": {"holders": [], "links": {"ids": [], "labels": [], "masters": [], "uuids": ["5804e2ea-a86e-45e1-92c8-f225b7fb60e0"]}, "sectors": "614400", "sectorsize": 512, "size": "300.00 MB", "start": "2048", "uuid": "5804e2ea-a86e-45e1-92c8-f225b7fb60e0"}, "vda2": {"holders": ["atomicos-root"], "links": {"ids": ["lvm-pv-uuid-Igwjxr-wF4v-rdeh-DZGI-IDKO-Jk6L-s4N1kl"], "labels": [], "masters": ["dm-0"], "uuids": []}, "sectors": "20355072", "sectorsize": 512, "size": "9.71 GB", "start": "616448", "uuid": null}}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "23068672", "sectorsize": "512", "size": "11.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdb": {"holders": ["atomicos-root"], "host": "", "links": {"ids": ["lvm-pv-uuid-y6m62w-WUbc-18QU-epOU-2Ksb-YU2v-ap0Ggu"], "labels": [], "masters": ["dm-0"], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "41943040", "sectorsize": "512", "size": "20.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdc": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdd": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vde": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}}, "ansible_distribution": "CentOS", "ansible_distribution_file_parsed": true, "ansible_distribution_file_path": "/etc/redhat-release", "ansible_distribution_file_variety": 
"RedHat", "ansible_distribution_major_version": "7", "ansible_distribution_release": "Core", "ansible_distribution_version": "7.6.1810", "ansible_dns": {"nameservers": ["192.168.121.1"]}, "ansible_docker0": {"active": false, "device": "docker0", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "off [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "on", "tx_gre_csum_segmentation": "on", "tx_gre_segmentation": "on", "tx_gso_partial": "on", "tx_gso_robust": "on", "tx_ipip_segmentation": "on", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "on", "tx_udp_tnl_segmentation": "on", "tx_vlan_offload": "on", "tx_vlan_stag_hw_insert": "on", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]"}, "hw_timestamp_filters": [], "id": "8000.0242c6b3567c", "interfaces": [], "ipv4": {"address": "172.17.0.1", "broadcast": "global", "netmask": "255.255.0.0", "network": "172.17.0.0"}, "macaddress": "02:42:c6:b3:56:7c", "mtu": 1500, "promisc": false, "stp": false, "timestamping": ["rx_software", "software"], "type": "bridge"}, "ansible_domain": "", "ansible_effective_group_id": 0, "ansible_effective_user_id": 0, "ansible_env": {"HOME": "/root", "LANG": "en_US.UTF-8", "LOGNAME": "root", "MAIL": "/var/mail/vagrant", "PATH": "/sbin:/bin:/usr/sbin:/usr/bin", "PWD": "/var/home/vagrant", "SHELL": "/bin/bash", "SHLVL": "1", "SUDO_COMMAND": "/bin/sh -c echo BECOME-SUCCESS-qiwsyucimcknqcnpgehklynepiizkmxz; /usr/bin/python", "SUDO_GID": "1000", "SUDO_UID": "1000", "SUDO_USER": "vagrant", "TERM": "unknown", "USER": "root", "USERNAME": "root", "XDG_SESSION_ID": "1", "_": "/usr/bin/python"}, "ansible_eth0": {"active": true, "device": "eth0", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "off [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "on [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": 
"on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "off [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "off", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]"}, "hw_timestamp_filters": [], "ipv4": {"address": "192.168.121.108", "broadcast": "192.168.121.255", "netmask": "255.255.255.0", "network": "192.168.121.0"}, "ipv6": [{"address": "fe80::5054:ff:fe62:b7f2", "prefix": "64", "scope": "link"}], "macaddress": "52:54:00:62:b7:f2", "module": "virtio_net", "mtu": 1500, "pciid": "virtio8", "promisc": false, "timestamping": ["rx_software", "software"], "type": "ether"}, "ansible_fips": false, "ansible_form_factor": "Other", "ansible_fqdn": "kube2", "ansible_hostname": "kube2", "ansible_interfaces": ["lo", "docker0", "eth0"], "ansible_is_chroot": false, "ansible_iscsi_iqn": "iqn.1994-05.com.redhat:7cbffc69d119", "ansible_kernel": "3.10.0-957.1.3.el7.x86_64", "ansible_lo": {"active": true, "device": "lo", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "on [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "on [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off [fixed]", "tx_scatter_gather": "on [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "on [fixed]"}, "hw_timestamp_filters": [], "ipv4": 
{"address": "127.0.0.1", "broadcast": "host", "netmask": "255.0.0.0", "network": "127.0.0.0"}, "ipv6": [{"address": "::1", "prefix": "128", "scope": "host"}], "mtu": 65536, "promisc": false, "timestamping": ["rx_software", "software"], "type": "loopback"}, "ansible_local": {}, "ansible_lsb": {}, "ansible_lvm": {"lvs": {"root": {"size_g": "26.77", "vg": "atomicos"}}, "pvs": {"/dev/vda2": {"free_g": "0", "size_g": "9.70", "vg": "atomicos"}, "/dev/vdb": {"free_g": "2.93", "size_g": "20.00", "vg": "atomicos"}}, "vgs": {"atomicos": {"free_g": "2.93", "num_lvs": "1", "num_pvs": "2", "size_g": "29.70"}}}, "ansible_machine": "x86_64", "ansible_machine_id": "42018ffcf0fd4a03991f9993e87b86ea", "ansible_memfree_mb": 1478, "ansible_memory_mb": {"nocache": {"free": 1647, "used": 191}, "real": {"free": 1478, "total": 1838, "used": 360}, "swap": {"cached": 0, "free": 0, "total": 0, "used": 0}}, "ansible_memtotal_mb": 1838, "ansible_mounts": [{"block_available": 6620729, "block_size": 4096, "block_total": 7014912, "block_used": 394183, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004254, "inode_total": 14034944, "inode_used": 30690, "mount": "/sysroot", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 27118505984, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6620729, "block_size": 4096, "block_total": 7014912, "block_used": 394183, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004254, "inode_total": 14034944, "inode_used": 30690, "mount": "/", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27118505984, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6620729, "block_size": 4096, "block_total": 7014912, "block_used": 394183, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004254, "inode_total": 14034944, "inode_used": 30690, "mount": "/usr", "options": "ro,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27118505984, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6620729, "block_size": 4096, "block_total": 7014912, "block_used": 394183, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004254, "inode_total": 14034944, "inode_used": 30690, "mount": "/var", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27118505984, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 51418, "block_size": 4096, "block_total": 75945, "block_used": 24527, "device": "/dev/vda1", "fstype": "xfs", "inode_available": 153279, "inode_total": 153600, "inode_used": 321, "mount": "/boot", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 210608128, "size_total": 311070720, "uuid": "5804e2ea-a86e-45e1-92c8-f225b7fb60e0"}, {"block_available": 6620729, "block_size": 4096, "block_total": 7014912, "block_used": 394183, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004254, "inode_total": 14034944, "inode_used": 30690, "mount": "/var/lib/docker/containers", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27118505984, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6620729, "block_size": 4096, "block_total": 7014912, "block_used": 394183, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", 
"inode_available": 14004254, "inode_total": 14034944, "inode_used": 30690, "mount": "/var/lib/docker/overlay2", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27118505984, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}], "ansible_nodename": "kube2", "ansible_os_family": "RedHat", "ansible_pkg_mgr": "atomic_container", "ansible_processor": ["0", "AuthenticAMD", "AMD Opteron(tm) Processor 4365 EE", "1", "AuthenticAMD", "AMD Opteron(tm) Processor 4365 EE"], "ansible_processor_cores": 1, "ansible_processor_count": 2, "ansible_processor_threads_per_core": 1, "ansible_processor_vcpus": 2, "ansible_product_name": "KVM", "ansible_product_serial": "NA", "ansible_product_uuid": "42018FFC-F0FD-4A03-991F-9993E87B86EA", "ansible_product_version": "RHEL 7.0.0 PC (i440FX + PIIX, 1996)", "ansible_python": {"executable": "/usr/bin/python", "has_sslcontext": true, "type": "CPython", "version": {"major": 2, "micro": 5, "minor": 7, "releaselevel": "final", "serial": 0}, "version_info": [2, 7, 5, "final", 0]}, "ansible_python_version": "2.7.5", "ansible_real_group_id": 0, "ansible_real_user_id": 0, "ansible_selinux": {"config_mode": "enforcing", "mode": "enforcing", "policyvers": 31, "status": "enabled", "type": "targeted"}, "ansible_selinux_python_present": true, "ansible_service_mgr": "systemd", "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBArwgd837p4EpmyadxDCp/1a1Nb6jxJujeaqAFs7t/GMGVFdr45hWdpMTOzIrVeiRi/U5yIMtXvdjoQXefZaCCY=", "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAILRAgIb19WAGp+E0Dxe0CktQYrS4Jfta39PV87Nij40k", "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDKGKfdWkCS7buWpw7PWxfDWuxfduvFd2FKMGmzvPBpyGTLuuStmo3mgGsUsN1HAnC540yo5KpJHfu3AO1FRstJo/BAiiqIJAELBS8mzlRORXy3760AkaCJzlxxMgDLlUobvFViXDqbIuBxUMG6v5K4GptyvTDp8lIgCvbhTclcLHmTB1aM/Qcm+330nAxUsTGcRwHrmfgyiqnAyaWDLA2MAHdj4UUG0NvAWmxOimsZBN6a+AVw1izjF8yCi0HjfSXA60iZx27XZA9NkZC5VRdRjFW9kz4Zf8Wgc4dMGszuuNkWJvcsyGzQ9fqOJqhV+pi6FZd6nNKHZlHo8S4zb5JZ", "ansible_swapfree_mb": 0, "ansible_swaptotal_mb": 0, "ansible_system": "Linux", "ansible_system_capabilities": ["cap_chown", "cap_dac_override", "cap_dac_read_search", "cap_fowner", "cap_fsetid", "cap_kill", "cap_setgid", "cap_setuid", "cap_setpcap", "cap_linux_immutable", "cap_net_bind_service", "cap_net_broadcast", "cap_net_admin", "cap_net_raw", "cap_ipc_lock", "cap_ipc_owner", "cap_sys_module", "cap_sys_rawio", "cap_sys_chroot", "cap_sys_ptrace", "cap_sys_pacct", "cap_sys_admin", "cap_sys_boot", "cap_sys_nice", "cap_sys_resource", "cap_sys_time", "cap_sys_tty_config", "cap_mknod", "cap_lease", "cap_audit_write", "cap_audit_control", "cap_setfcap", "cap_mac_override", "cap_mac_admin", "cap_syslog", "35", "36+ep"], "ansible_system_capabilities_enforced": "True", "ansible_system_vendor": "Red Hat", "ansible_uptime_seconds": 31, "ansible_user_dir": "/root", "ansible_user_gecos": "root", "ansible_user_gid": 0, "ansible_user_id": "root", "ansible_user_shell": "/bin/bash", "ansible_user_uid": 0, "ansible_userspace_architecture": "x86_64", "ansible_userspace_bits": "64", "ansible_virtualization_role": "guest", "ansible_virtualization_type": "kvm", "gather_subset": ["all"], "module_setup": true}, "changed": false, "failed": false, "invocation": {"module_args": {"fact_path": "/etc/ansible/facts.d", "filter": "*", "gather_subset": ["all"], "gather_timeout": 10}}, "item": "kube2"}, {"_ansible_delegated_vars": {"ansible_delegated_host": "kube3", "ansible_host": 
"192.168.121.206"}, "_ansible_ignore_errors": null, "_ansible_item_label": "kube3", "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "_ansible_verbose_override": true, "ansible_facts": {"ansible_all_ipv4_addresses": ["192.168.121.206", "172.17.0.1"], "ansible_all_ipv6_addresses": ["fe80::5054:ff:fe22:d4bd"], "ansible_apparmor": {"status": "disabled"}, "ansible_architecture": "x86_64", "ansible_bios_date": "01/01/2011", "ansible_bios_version": "0.5.1", "ansible_cmdline": {"BOOT_IMAGE": "/ostree/centos-atomic-host-ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/vmlinuz-3.10.0-957.1.3.el7.x86_64", "console": "ttyS0,115200n8", "crashkernel": "auto", "no_timer_check": true, "ostree": "/ostree/boot.1/centos-atomic-host/ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/0", "rd.lvm.lv": "atomicos/root", "root": "/dev/mapper/atomicos-root"}, "ansible_date_time": {"date": "2019-05-02", "day": "02", "epoch": "1556757536", "hour": "00", "iso8601": "2019-05-02T00:38:56Z", "iso8601_basic": "20190502T003856428693", "iso8601_basic_short": "20190502T003856", "iso8601_micro": "2019-05-02T00:38:56.428962Z", "minute": "38", "month": "05", "second": "56", "time": "00:38:56", "tz": "UTC", "tz_offset": "+0000", "weekday": "Thursday", "weekday_number": "4", "weeknumber": "17", "year": "2019"}, "ansible_default_ipv4": {"address": "192.168.121.206", "alias": "eth0", "broadcast": "192.168.121.255", "gateway": "192.168.121.1", "interface": "eth0", "macaddress": "52:54:00:22:d4:bd", "mtu": 1500, "netmask": "255.255.255.0", "network": "192.168.121.0", "type": "ether"}, "ansible_default_ipv6": {}, "ansible_device_links": {"ids": {"dm-0": ["dm-name-atomicos-root", "dm-uuid-LVM-66eMHV6iDKuNpuUkbfaS59FexWnof79axp8dVuJg8PQpi4u2jEXiS2onl9tiFl4L"], "vda2": ["lvm-pv-uuid-Igwjxr-wF4v-rdeh-DZGI-IDKO-Jk6L-s4N1kl"], "vdb": ["lvm-pv-uuid-CQHcf5-D0Xn-5UA4-e5xN-xVKm-9p7S-fquWcF"]}, "labels": {}, "masters": {"vda2": ["dm-0"], "vdb": ["dm-0"]}, "uuids": {"dm-0": ["3aa470ac-9a1b-44c5-871a-d72d513af0e2"], "vda1": ["5804e2ea-a86e-45e1-92c8-f225b7fb60e0"]}}, "ansible_devices": {"dm-0": {"holders": [], "host": "", "links": {"ids": ["dm-name-atomicos-root", "dm-uuid-LVM-66eMHV6iDKuNpuUkbfaS59FexWnof79axp8dVuJg8PQpi4u2jEXiS2onl9tiFl4L"], "labels": [], "masters": [], "uuids": ["3aa470ac-9a1b-44c5-871a-d72d513af0e2"]}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "", "sectors": "56139776", "sectorsize": "512", "size": "26.77 GB", "support_discard": "0", "vendor": null, "virtual": 1}, "vda": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {"vda1": {"holders": [], "links": {"ids": [], "labels": [], "masters": [], "uuids": ["5804e2ea-a86e-45e1-92c8-f225b7fb60e0"]}, "sectors": "614400", "sectorsize": 512, "size": "300.00 MB", "start": "2048", "uuid": "5804e2ea-a86e-45e1-92c8-f225b7fb60e0"}, "vda2": {"holders": ["atomicos-root"], "links": {"ids": ["lvm-pv-uuid-Igwjxr-wF4v-rdeh-DZGI-IDKO-Jk6L-s4N1kl"], "labels": [], "masters": ["dm-0"], "uuids": []}, "sectors": "20355072", "sectorsize": 512, "size": "9.71 GB", "start": "616448", "uuid": null}}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "23068672", "sectorsize": "512", "size": "11.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdb": {"holders": ["atomicos-root"], "host": 
"", "links": {"ids": ["lvm-pv-uuid-CQHcf5-D0Xn-5UA4-e5xN-xVKm-9p7S-fquWcF"], "labels": [], "masters": ["dm-0"], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "41943040", "sectorsize": "512", "size": "20.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdc": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdd": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vde": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}}, "ansible_distribution": "CentOS", "ansible_distribution_file_parsed": true, "ansible_distribution_file_path": "/etc/redhat-release", "ansible_distribution_file_variety": "RedHat", "ansible_distribution_major_version": "7", "ansible_distribution_release": "Core", "ansible_distribution_version": "7.6.1810", "ansible_dns": {"nameservers": ["192.168.121.1"]}, "ansible_docker0": {"active": false, "device": "docker0", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "off [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "on", "tx_gre_csum_segmentation": "on", "tx_gre_segmentation": "on", "tx_gso_partial": "on", "tx_gso_robust": "on", "tx_ipip_segmentation": "on", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "on", "tx_udp_tnl_segmentation": "on", "tx_vlan_offload": "on", "tx_vlan_stag_hw_insert": "on", "udp_fragmentation_offload": "on", "vlan_challenged": "off 
[fixed]"}, "hw_timestamp_filters": [], "id": "8000.0242c79eb9d1", "interfaces": [], "ipv4": {"address": "172.17.0.1", "broadcast": "global", "netmask": "255.255.0.0", "network": "172.17.0.0"}, "macaddress": "02:42:c7:9e:b9:d1", "mtu": 1500, "promisc": false, "stp": false, "timestamping": ["rx_software", "software"], "type": "bridge"}, "ansible_domain": "", "ansible_effective_group_id": 0, "ansible_effective_user_id": 0, "ansible_env": {"HOME": "/root", "LANG": "en_US.UTF-8", "LOGNAME": "root", "MAIL": "/var/mail/vagrant", "PATH": "/sbin:/bin:/usr/sbin:/usr/bin", "PWD": "/var/home/vagrant", "SHELL": "/bin/bash", "SHLVL": "1", "SUDO_COMMAND": "/bin/sh -c echo BECOME-SUCCESS-lzfugvfwhqkvzatlsxddfimlfayuaswt; /usr/bin/python", "SUDO_GID": "1000", "SUDO_UID": "1000", "SUDO_USER": "vagrant", "TERM": "unknown", "USER": "root", "USERNAME": "root", "XDG_SESSION_ID": "1", "_": "/usr/bin/python"}, "ansible_eth0": {"active": true, "device": "eth0", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "off [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "on [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "off [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "off", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]"}, "hw_timestamp_filters": [], "ipv4": {"address": "192.168.121.206", "broadcast": "192.168.121.255", "netmask": "255.255.255.0", "network": "192.168.121.0"}, "ipv6": [{"address": "fe80::5054:ff:fe22:d4bd", "prefix": "64", "scope": "link"}], "macaddress": "52:54:00:22:d4:bd", "module": "virtio_net", "mtu": 1500, "pciid": "virtio8", "promisc": false, "timestamping": ["rx_software", "software"], "type": "ether"}, "ansible_fips": false, "ansible_form_factor": "Other", "ansible_fqdn": "kube3", "ansible_hostname": "kube3", "ansible_interfaces": ["lo", "docker0", "eth0"], "ansible_is_chroot": false, "ansible_iscsi_iqn": "iqn.1994-05.com.redhat:7cbffc69d119", "ansible_kernel": "3.10.0-957.1.3.el7.x86_64", "ansible_lo": {"active": true, "device": "lo", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", 
"generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "on [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "on [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off [fixed]", "tx_scatter_gather": "on [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "on [fixed]"}, "hw_timestamp_filters": [], "ipv4": {"address": "127.0.0.1", "broadcast": "host", "netmask": "255.0.0.0", "network": "127.0.0.0"}, "ipv6": [{"address": "::1", "prefix": "128", "scope": "host"}], "mtu": 65536, "promisc": false, "timestamping": ["rx_software", "software"], "type": "loopback"}, "ansible_local": {}, "ansible_lsb": {}, "ansible_lvm": {"lvs": {"root": {"size_g": "26.77", "vg": "atomicos"}}, "pvs": {"/dev/vda2": {"free_g": "0", "size_g": "9.70", "vg": "atomicos"}, "/dev/vdb": {"free_g": "2.93", "size_g": "20.00", "vg": "atomicos"}}, "vgs": {"atomicos": {"free_g": "2.93", "num_lvs": "1", "num_pvs": "2", "size_g": "29.70"}}}, "ansible_machine": "x86_64", "ansible_machine_id": "ff4692c346ad48c7abf6e61c26e9d7b9", "ansible_memfree_mb": 1479, "ansible_memory_mb": {"nocache": {"free": 1656, "used": 182}, "real": {"free": 1479, "total": 1838, "used": 359}, "swap": {"cached": 0, "free": 0, "total": 0, "used": 0}}, "ansible_memtotal_mb": 1838, "ansible_mounts": [{"block_available": 6618151, "block_size": 4096, "block_total": 7014912, "block_used": 396761, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004106, "inode_total": 14034944, "inode_used": 30838, "mount": "/sysroot", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 27107946496, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6618151, "block_size": 4096, "block_total": 7014912, "block_used": 396761, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004106, "inode_total": 14034944, "inode_used": 30838, "mount": "/", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27107946496, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6618151, "block_size": 4096, "block_total": 7014912, "block_used": 396761, 
"device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004106, "inode_total": 14034944, "inode_used": 30838, "mount": "/usr", "options": "ro,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27107946496, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6618151, "block_size": 4096, "block_total": 7014912, "block_used": 396761, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004106, "inode_total": 14034944, "inode_used": 30838, "mount": "/var", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27107946496, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 51418, "block_size": 4096, "block_total": 75945, "block_used": 24527, "device": "/dev/vda1", "fstype": "xfs", "inode_available": 153279, "inode_total": 153600, "inode_used": 321, "mount": "/boot", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 210608128, "size_total": 311070720, "uuid": "5804e2ea-a86e-45e1-92c8-f225b7fb60e0"}, {"block_available": 6618151, "block_size": 4096, "block_total": 7014912, "block_used": 396761, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004106, "inode_total": 14034944, "inode_used": 30838, "mount": "/var/lib/docker/containers", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27107946496, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6618151, "block_size": 4096, "block_total": 7014912, "block_used": 396761, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004106, "inode_total": 14034944, "inode_used": 30838, "mount": "/var/lib/docker/overlay2", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27107946496, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}], "ansible_nodename": "kube3", "ansible_os_family": "RedHat", "ansible_pkg_mgr": "atomic_container", "ansible_processor": ["0", "AuthenticAMD", "AMD Opteron(tm) Processor 4365 EE", "1", "AuthenticAMD", "AMD Opteron(tm) Processor 4365 EE"], "ansible_processor_cores": 1, "ansible_processor_count": 2, "ansible_processor_threads_per_core": 1, "ansible_processor_vcpus": 2, "ansible_product_name": "KVM", "ansible_product_serial": "NA", "ansible_product_uuid": "FF4692C3-46AD-48C7-ABF6-E61C26E9D7B9", "ansible_product_version": "RHEL 7.0.0 PC (i440FX + PIIX, 1996)", "ansible_python": {"executable": "/usr/bin/python", "has_sslcontext": true, "type": "CPython", "version": {"major": 2, "micro": 5, "minor": 7, "releaselevel": "final", "serial": 0}, "version_info": [2, 7, 5, "final", 0]}, "ansible_python_version": "2.7.5", "ansible_real_group_id": 0, "ansible_real_user_id": 0, "ansible_selinux": {"config_mode": "enforcing", "mode": "enforcing", "policyvers": 31, "status": "enabled", "type": "targeted"}, "ansible_selinux_python_present": true, "ansible_service_mgr": "systemd", "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB2+r2G7JmdjA7oIPSaV1T9WzPBP7Iqoa5HiYA66ozci8/0bQRErqa+1Mjb4TkxccQTzaSB2FluIb9VBt8nk6I4=", "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAIIXsNRBEIEC3Glc0k67DQt0Nw2+8yQ62ynHh4v2H8I2C", "ansible_ssh_host_key_rsa_public": 
"AAAAB3NzaC1yc2EAAAADAQABAAABAQDU9/ohe1ntu60AGJr+Nl7BI1F0dXUvLqa/aAuTZfKlt8j7L2ev6US4lFj/dMUvGCUxyk9MrD8+G3XoeUROOrRerdfyHYvLmZhA6vT1jKXz5u1gb13UnFa8LGez/32WVoWWwHf3+2YA7FvyvViVTDMEC5Rpoy7KZu//fRbUhfRAIuITGEqmhd8zjuyU+4lAiey5rTBG4MreD0ahWwbJ7441Uhima/P3cgjM7siHoa/4YZNQvYSeI3KS5pPobYmnObpi7J0asv3iXF2S2nptzhnmIOHMwFaYQ/hx3OXLFgHFHC+93yY1cybUQ+tk322SMeaiyZsD2YM/oEeSKQ32GLar", "ansible_swapfree_mb": 0, "ansible_swaptotal_mb": 0, "ansible_system": "Linux", "ansible_system_capabilities": ["cap_chown", "cap_dac_override", "cap_dac_read_search", "cap_fowner", "cap_fsetid", "cap_kill", "cap_setgid", "cap_setuid", "cap_setpcap", "cap_linux_immutable", "cap_net_bind_service", "cap_net_broadcast", "cap_net_admin", "cap_net_raw", "cap_ipc_lock", "cap_ipc_owner", "cap_sys_module", "cap_sys_rawio", "cap_sys_chroot", "cap_sys_ptrace", "cap_sys_pacct", "cap_sys_admin", "cap_sys_boot", "cap_sys_nice", "cap_sys_resource", "cap_sys_time", "cap_sys_tty_config", "cap_mknod", "cap_lease", "cap_audit_write", "cap_audit_control", "cap_setfcap", "cap_mac_override", "cap_mac_admin", "cap_syslog", "35", "36+ep"], "ansible_system_capabilities_enforced": "True", "ansible_system_vendor": "Red Hat", "ansible_uptime_seconds": 32, "ansible_user_dir": "/root", "ansible_user_gecos": "root", "ansible_user_gid": 0, "ansible_user_id": "root", "ansible_user_shell": "/bin/bash", "ansible_user_uid": 0, "ansible_userspace_architecture": "x86_64", "ansible_userspace_bits": "64", "ansible_virtualization_role": "guest", "ansible_virtualization_type": "kvm", "gather_subset": ["all"], "module_setup": true}, "changed": false, "failed": false, "invocation": {"module_args": {"fact_path": "/etc/ansible/facts.d", "filter": "*", "gather_subset": ["all"], "gather_timeout": 10}}, "item": "kube3"}, {"_ansible_ignore_errors": null, "_ansible_item_label": "kube1", "_ansible_item_result": true, "item": "kube1", "msg": "SSH Error: data could not be sent to remote host \"192.168.121.207\". 
Make sure this host can be reached over ssh", "unreachable": true}, {"_ansible_delegated_vars": {"ansible_delegated_host": "kube2", "ansible_host": "192.168.121.108"}, "_ansible_ignore_errors": null, "_ansible_item_label": "kube2", "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "_ansible_verbose_override": true, "ansible_facts": {"ansible_all_ipv4_addresses": ["192.168.121.108", "172.17.0.1"], "ansible_all_ipv6_addresses": ["fe80::5054:ff:fe62:b7f2"], "ansible_apparmor": {"status": "disabled"}, "ansible_architecture": "x86_64", "ansible_bios_date": "01/01/2011", "ansible_bios_version": "0.5.1", "ansible_cmdline": {"BOOT_IMAGE": "/ostree/centos-atomic-host-ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/vmlinuz-3.10.0-957.1.3.el7.x86_64", "console": "ttyS0,115200n8", "crashkernel": "auto", "no_timer_check": true, "ostree": "/ostree/boot.1/centos-atomic-host/ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/0", "rd.lvm.lv": "atomicos/root", "root": "/dev/mapper/atomicos-root"}, "ansible_date_time": {"date": "2019-05-02", "day": "02", "epoch": "1556757538", "hour": "00", "iso8601": "2019-05-02T00:38:58Z", "iso8601_basic": "20190502T003858005510", "iso8601_basic_short": "20190502T003858", "iso8601_micro": "2019-05-02T00:38:58.005674Z", "minute": "38", "month": "05", "second": "58", "time": "00:38:58", "tz": "UTC", "tz_offset": "+0000", "weekday": "Thursday", "weekday_number": "4", "weeknumber": "17", "year": "2019"}, "ansible_default_ipv4": {"address": "192.168.121.108", "alias": "eth0", "broadcast": "192.168.121.255", "gateway": "192.168.121.1", "interface": "eth0", "macaddress": "52:54:00:62:b7:f2", "mtu": 1500, "netmask": "255.255.255.0", "network": "192.168.121.0", "type": "ether"}, "ansible_default_ipv6": {}, "ansible_device_links": {"ids": {"dm-0": ["dm-name-atomicos-root", "dm-uuid-LVM-66eMHV6iDKuNpuUkbfaS59FexWnof79axp8dVuJg8PQpi4u2jEXiS2onl9tiFl4L"], "vda2": ["lvm-pv-uuid-Igwjxr-wF4v-rdeh-DZGI-IDKO-Jk6L-s4N1kl"], "vdb": ["lvm-pv-uuid-y6m62w-WUbc-18QU-epOU-2Ksb-YU2v-ap0Ggu"]}, "labels": {}, "masters": {"vda2": ["dm-0"], "vdb": ["dm-0"]}, "uuids": {"dm-0": ["3aa470ac-9a1b-44c5-871a-d72d513af0e2"], "vda1": ["5804e2ea-a86e-45e1-92c8-f225b7fb60e0"]}}, "ansible_devices": {"dm-0": {"holders": [], "host": "", "links": {"ids": ["dm-name-atomicos-root", "dm-uuid-LVM-66eMHV6iDKuNpuUkbfaS59FexWnof79axp8dVuJg8PQpi4u2jEXiS2onl9tiFl4L"], "labels": [], "masters": [], "uuids": ["3aa470ac-9a1b-44c5-871a-d72d513af0e2"]}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "", "sectors": "56139776", "sectorsize": "512", "size": "26.77 GB", "support_discard": "0", "vendor": null, "virtual": 1}, "vda": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {"vda1": {"holders": [], "links": {"ids": [], "labels": [], "masters": [], "uuids": ["5804e2ea-a86e-45e1-92c8-f225b7fb60e0"]}, "sectors": "614400", "sectorsize": 512, "size": "300.00 MB", "start": "2048", "uuid": "5804e2ea-a86e-45e1-92c8-f225b7fb60e0"}, "vda2": {"holders": ["atomicos-root"], "links": {"ids": ["lvm-pv-uuid-Igwjxr-wF4v-rdeh-DZGI-IDKO-Jk6L-s4N1kl"], "labels": [], "masters": ["dm-0"], "uuids": []}, "sectors": "20355072", "sectorsize": 512, "size": "9.71 GB", "start": "616448", "uuid": null}}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": 
"23068672", "sectorsize": "512", "size": "11.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdb": {"holders": ["atomicos-root"], "host": "", "links": {"ids": ["lvm-pv-uuid-y6m62w-WUbc-18QU-epOU-2Ksb-YU2v-ap0Ggu"], "labels": [], "masters": ["dm-0"], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "41943040", "sectorsize": "512", "size": "20.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdc": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdd": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vde": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}}, "ansible_distribution": "CentOS", "ansible_distribution_file_parsed": true, "ansible_distribution_file_path": "/etc/redhat-release", "ansible_distribution_file_variety": "RedHat", "ansible_distribution_major_version": "7", "ansible_distribution_release": "Core", "ansible_distribution_version": "7.6.1810", "ansible_dns": {"nameservers": ["192.168.121.1"]}, "ansible_docker0": {"active": false, "device": "docker0", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "off [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "on", "tx_gre_csum_segmentation": "on", "tx_gre_segmentation": "on", "tx_gso_partial": "on", "tx_gso_robust": "on", "tx_ipip_segmentation": "on", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": 
"on", "tx_udp_tnl_segmentation": "on", "tx_vlan_offload": "on", "tx_vlan_stag_hw_insert": "on", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]"}, "hw_timestamp_filters": [], "id": "8000.0242c6b3567c", "interfaces": [], "ipv4": {"address": "172.17.0.1", "broadcast": "global", "netmask": "255.255.0.0", "network": "172.17.0.0"}, "macaddress": "02:42:c6:b3:56:7c", "mtu": 1500, "promisc": false, "stp": false, "timestamping": ["rx_software", "software"], "type": "bridge"}, "ansible_domain": "", "ansible_effective_group_id": 0, "ansible_effective_user_id": 0, "ansible_env": {"HOME": "/root", "LANG": "en_US.UTF-8", "LOGNAME": "root", "MAIL": "/var/mail/vagrant", "PATH": "/sbin:/bin:/usr/sbin:/usr/bin", "PWD": "/var/home/vagrant", "SHELL": "/bin/bash", "SHLVL": "1", "SUDO_COMMAND": "/bin/sh -c echo BECOME-SUCCESS-lpnclgqjvwrgccbccqcixuxvxgmusmro; /usr/bin/python", "SUDO_GID": "1000", "SUDO_UID": "1000", "SUDO_USER": "vagrant", "TERM": "unknown", "USER": "root", "USERNAME": "root", "XDG_SESSION_ID": "1", "_": "/usr/bin/python"}, "ansible_eth0": {"active": true, "device": "eth0", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "off [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "on [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "off [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "off", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]"}, "hw_timestamp_filters": [], "ipv4": {"address": "192.168.121.108", "broadcast": "192.168.121.255", "netmask": "255.255.255.0", "network": "192.168.121.0"}, "ipv6": [{"address": "fe80::5054:ff:fe62:b7f2", "prefix": "64", "scope": "link"}], "macaddress": "52:54:00:62:b7:f2", "module": "virtio_net", "mtu": 1500, "pciid": "virtio8", "promisc": false, "timestamping": ["rx_software", "software"], "type": "ether"}, "ansible_fips": false, "ansible_form_factor": "Other", "ansible_fqdn": "kube2", "ansible_hostname": "kube2", "ansible_interfaces": ["lo", "docker0", "eth0"], "ansible_is_chroot": false, "ansible_iscsi_iqn": "iqn.1994-05.com.redhat:7cbffc69d119", "ansible_kernel": 
"3.10.0-957.1.3.el7.x86_64", "ansible_lo": {"active": true, "device": "lo", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "on [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "on [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off [fixed]", "tx_scatter_gather": "on [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "on [fixed]"}, "hw_timestamp_filters": [], "ipv4": {"address": "127.0.0.1", "broadcast": "host", "netmask": "255.0.0.0", "network": "127.0.0.0"}, "ipv6": [{"address": "::1", "prefix": "128", "scope": "host"}], "mtu": 65536, "promisc": false, "timestamping": ["rx_software", "software"], "type": "loopback"}, "ansible_local": {}, "ansible_lsb": {}, "ansible_lvm": {"lvs": {"root": {"size_g": "26.77", "vg": "atomicos"}}, "pvs": {"/dev/vda2": {"free_g": "0", "size_g": "9.70", "vg": "atomicos"}, "/dev/vdb": {"free_g": "2.93", "size_g": "20.00", "vg": "atomicos"}}, "vgs": {"atomicos": {"free_g": "2.93", "num_lvs": "1", "num_pvs": "2", "size_g": "29.70"}}}, "ansible_machine": "x86_64", "ansible_machine_id": "42018ffcf0fd4a03991f9993e87b86ea", "ansible_memfree_mb": 1464, "ansible_memory_mb": {"nocache": {"free": 1645, "used": 193}, "real": {"free": 1464, "total": 1838, "used": 374}, "swap": {"cached": 0, "free": 0, "total": 0, "used": 0}}, "ansible_memtotal_mb": 1838, "ansible_mounts": [{"block_available": 6617322, "block_size": 4096, "block_total": 7014912, "block_used": 397590, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004086, "inode_total": 14034944, "inode_used": 30858, "mount": "/sysroot", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 27104550912, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6617322, "block_size": 4096, "block_total": 7014912, "block_used": 397590, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004086, "inode_total": 14034944, "inode_used": 30858, "mount": "/", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27104550912, 
"size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6617322, "block_size": 4096, "block_total": 7014912, "block_used": 397590, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004086, "inode_total": 14034944, "inode_used": 30858, "mount": "/usr", "options": "ro,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27104550912, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6617240, "block_size": 4096, "block_total": 7014912, "block_used": 397672, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004082, "inode_total": 14034944, "inode_used": 30862, "mount": "/var", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27104215040, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 51418, "block_size": 4096, "block_total": 75945, "block_used": 24527, "device": "/dev/vda1", "fstype": "xfs", "inode_available": 153279, "inode_total": 153600, "inode_used": 321, "mount": "/boot", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 210608128, "size_total": 311070720, "uuid": "5804e2ea-a86e-45e1-92c8-f225b7fb60e0"}, {"block_available": 6617240, "block_size": 4096, "block_total": 7014912, "block_used": 397672, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004082, "inode_total": 14034944, "inode_used": 30862, "mount": "/var/lib/docker/containers", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27104215040, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6617240, "block_size": 4096, "block_total": 7014912, "block_used": 397672, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004082, "inode_total": 14034944, "inode_used": 30862, "mount": "/var/lib/docker/overlay2", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27104215040, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}], "ansible_nodename": "kube2", "ansible_os_family": "RedHat", "ansible_pkg_mgr": "atomic_container", "ansible_processor": ["0", "AuthenticAMD", "AMD Opteron(tm) Processor 4365 EE", "1", "AuthenticAMD", "AMD Opteron(tm) Processor 4365 EE"], "ansible_processor_cores": 1, "ansible_processor_count": 2, "ansible_processor_threads_per_core": 1, "ansible_processor_vcpus": 2, "ansible_product_name": "KVM", "ansible_product_serial": "NA", "ansible_product_uuid": "42018FFC-F0FD-4A03-991F-9993E87B86EA", "ansible_product_version": "RHEL 7.0.0 PC (i440FX + PIIX, 1996)", "ansible_python": {"executable": "/usr/bin/python", "has_sslcontext": true, "type": "CPython", "version": {"major": 2, "micro": 5, "minor": 7, "releaselevel": "final", "serial": 0}, "version_info": [2, 7, 5, "final", 0]}, "ansible_python_version": "2.7.5", "ansible_real_group_id": 0, "ansible_real_user_id": 0, "ansible_selinux": {"config_mode": "enforcing", "mode": "enforcing", "policyvers": 31, "status": "enabled", "type": "targeted"}, "ansible_selinux_python_present": true, "ansible_service_mgr": "systemd", "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBArwgd837p4EpmyadxDCp/1a1Nb6jxJujeaqAFs7t/GMGVFdr45hWdpMTOzIrVeiRi/U5yIMtXvdjoQXefZaCCY=", "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAILRAgIb19WAGp+E0Dxe0CktQYrS4Jfta39PV87Nij40k", "ansible_ssh_host_key_rsa_public": 
"AAAAB3NzaC1yc2EAAAADAQABAAABAQDKGKfdWkCS7buWpw7PWxfDWuxfduvFd2FKMGmzvPBpyGTLuuStmo3mgGsUsN1HAnC540yo5KpJHfu3AO1FRstJo/BAiiqIJAELBS8mzlRORXy3760AkaCJzlxxMgDLlUobvFViXDqbIuBxUMG6v5K4GptyvTDp8lIgCvbhTclcLHmTB1aM/Qcm+330nAxUsTGcRwHrmfgyiqnAyaWDLA2MAHdj4UUG0NvAWmxOimsZBN6a+AVw1izjF8yCi0HjfSXA60iZx27XZA9NkZC5VRdRjFW9kz4Zf8Wgc4dMGszuuNkWJvcsyGzQ9fqOJqhV+pi6FZd6nNKHZlHo8S4zb5JZ", "ansible_swapfree_mb": 0, "ansible_swaptotal_mb": 0, "ansible_system": "Linux", "ansible_system_capabilities": ["cap_chown", "cap_dac_override", "cap_dac_read_search", "cap_fowner", "cap_fsetid", "cap_kill", "cap_setgid", "cap_setuid", "cap_setpcap", "cap_linux_immutable", "cap_net_bind_service", "cap_net_broadcast", "cap_net_admin", "cap_net_raw", "cap_ipc_lock", "cap_ipc_owner", "cap_sys_module", "cap_sys_rawio", "cap_sys_chroot", "cap_sys_ptrace", "cap_sys_pacct", "cap_sys_admin", "cap_sys_boot", "cap_sys_nice", "cap_sys_resource", "cap_sys_time", "cap_sys_tty_config", "cap_mknod", "cap_lease", "cap_audit_write", "cap_audit_control", "cap_setfcap", "cap_mac_override", "cap_mac_admin", "cap_syslog", "35", "36+ep"], "ansible_system_capabilities_enforced": "True", "ansible_system_vendor": "Red Hat", "ansible_uptime_seconds": 34, "ansible_user_dir": "/root", "ansible_user_gecos": "root", "ansible_user_gid": 0, "ansible_user_id": "root", "ansible_user_shell": "/bin/bash", "ansible_user_uid": 0, "ansible_userspace_architecture": "x86_64", "ansible_userspace_bits": "64", "ansible_virtualization_role": "guest", "ansible_virtualization_type": "kvm", "gather_subset": ["all"], "module_setup": true}, "changed": false, "failed": false, "invocation": {"module_args": {"fact_path": "/etc/ansible/facts.d", "filter": "*", "gather_subset": ["all"], "gather_timeout": 10}}, "item": "kube2"}, {"_ansible_delegated_vars": {"ansible_delegated_host": "kube3", "ansible_host": "192.168.121.206"}, "_ansible_ignore_errors": null, "_ansible_item_label": "kube3", "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "_ansible_verbose_override": true, "ansible_facts": {"ansible_all_ipv4_addresses": ["192.168.121.206", "172.17.0.1"], "ansible_all_ipv6_addresses": ["fe80::5054:ff:fe22:d4bd"], "ansible_apparmor": {"status": "disabled"}, "ansible_architecture": "x86_64", "ansible_bios_date": "01/01/2011", "ansible_bios_version": "0.5.1", "ansible_cmdline": {"BOOT_IMAGE": "/ostree/centos-atomic-host-ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/vmlinuz-3.10.0-957.1.3.el7.x86_64", "console": "ttyS0,115200n8", "crashkernel": "auto", "no_timer_check": true, "ostree": "/ostree/boot.1/centos-atomic-host/ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/0", "rd.lvm.lv": "atomicos/root", "root": "/dev/mapper/atomicos-root"}, "ansible_date_time": {"date": "2019-05-02", "day": "02", "epoch": "1556757539", "hour": "00", "iso8601": "2019-05-02T00:38:59Z", "iso8601_basic": "20190502T003859430143", "iso8601_basic_short": "20190502T003859", "iso8601_micro": "2019-05-02T00:38:59.430317Z", "minute": "38", "month": "05", "second": "59", "time": "00:38:59", "tz": "UTC", "tz_offset": "+0000", "weekday": "Thursday", "weekday_number": "4", "weeknumber": "17", "year": "2019"}, "ansible_default_ipv4": {"address": "192.168.121.206", "alias": "eth0", "broadcast": "192.168.121.255", "gateway": "192.168.121.1", "interface": "eth0", "macaddress": "52:54:00:22:d4:bd", "mtu": 1500, "netmask": "255.255.255.0", "network": "192.168.121.0", "type": "ether"}, "ansible_default_ipv6": {}, "ansible_device_links": {"ids": {"dm-0": 
["dm-name-atomicos-root", "dm-uuid-LVM-66eMHV6iDKuNpuUkbfaS59FexWnof79axp8dVuJg8PQpi4u2jEXiS2onl9tiFl4L"], "vda2": ["lvm-pv-uuid-Igwjxr-wF4v-rdeh-DZGI-IDKO-Jk6L-s4N1kl"], "vdb": ["lvm-pv-uuid-CQHcf5-D0Xn-5UA4-e5xN-xVKm-9p7S-fquWcF"]}, "labels": {}, "masters": {"vda2": ["dm-0"], "vdb": ["dm-0"]}, "uuids": {"dm-0": ["3aa470ac-9a1b-44c5-871a-d72d513af0e2"], "vda1": ["5804e2ea-a86e-45e1-92c8-f225b7fb60e0"]}}, "ansible_devices": {"dm-0": {"holders": [], "host": "", "links": {"ids": ["dm-name-atomicos-root", "dm-uuid-LVM-66eMHV6iDKuNpuUkbfaS59FexWnof79axp8dVuJg8PQpi4u2jEXiS2onl9tiFl4L"], "labels": [], "masters": [], "uuids": ["3aa470ac-9a1b-44c5-871a-d72d513af0e2"]}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "", "sectors": "56139776", "sectorsize": "512", "size": "26.77 GB", "support_discard": "0", "vendor": null, "virtual": 1}, "vda": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {"vda1": {"holders": [], "links": {"ids": [], "labels": [], "masters": [], "uuids": ["5804e2ea-a86e-45e1-92c8-f225b7fb60e0"]}, "sectors": "614400", "sectorsize": 512, "size": "300.00 MB", "start": "2048", "uuid": "5804e2ea-a86e-45e1-92c8-f225b7fb60e0"}, "vda2": {"holders": ["atomicos-root"], "links": {"ids": ["lvm-pv-uuid-Igwjxr-wF4v-rdeh-DZGI-IDKO-Jk6L-s4N1kl"], "labels": [], "masters": ["dm-0"], "uuids": []}, "sectors": "20355072", "sectorsize": 512, "size": "9.71 GB", "start": "616448", "uuid": null}}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "23068672", "sectorsize": "512", "size": "11.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdb": {"holders": ["atomicos-root"], "host": "", "links": {"ids": ["lvm-pv-uuid-CQHcf5-D0Xn-5UA4-e5xN-xVKm-9p7S-fquWcF"], "labels": [], "masters": ["dm-0"], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "41943040", "sectorsize": "512", "size": "20.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdc": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdd": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vde": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}}, "ansible_distribution": "CentOS", "ansible_distribution_file_parsed": true, "ansible_distribution_file_path": "/etc/redhat-release", "ansible_distribution_file_variety": 
"RedHat", "ansible_distribution_major_version": "7", "ansible_distribution_release": "Core", "ansible_distribution_version": "7.6.1810", "ansible_dns": {"nameservers": ["192.168.121.1"]}, "ansible_docker0": {"active": false, "device": "docker0", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "off [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "on", "tx_gre_csum_segmentation": "on", "tx_gre_segmentation": "on", "tx_gso_partial": "on", "tx_gso_robust": "on", "tx_ipip_segmentation": "on", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "on", "tx_udp_tnl_segmentation": "on", "tx_vlan_offload": "on", "tx_vlan_stag_hw_insert": "on", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]"}, "hw_timestamp_filters": [], "id": "8000.0242c79eb9d1", "interfaces": [], "ipv4": {"address": "172.17.0.1", "broadcast": "global", "netmask": "255.255.0.0", "network": "172.17.0.0"}, "macaddress": "02:42:c7:9e:b9:d1", "mtu": 1500, "promisc": false, "stp": false, "timestamping": ["rx_software", "software"], "type": "bridge"}, "ansible_domain": "", "ansible_effective_group_id": 0, "ansible_effective_user_id": 0, "ansible_env": {"HOME": "/root", "LANG": "en_US.UTF-8", "LOGNAME": "root", "MAIL": "/var/mail/vagrant", "PATH": "/sbin:/bin:/usr/sbin:/usr/bin", "PWD": "/var/home/vagrant", "SHELL": "/bin/bash", "SHLVL": "1", "SUDO_COMMAND": "/bin/sh -c echo BECOME-SUCCESS-ddkqlpfrbbhhozzmtitfttbcenxyrnou; /usr/bin/python", "SUDO_GID": "1000", "SUDO_UID": "1000", "SUDO_USER": "vagrant", "TERM": "unknown", "USER": "root", "USERNAME": "root", "XDG_SESSION_ID": "1", "_": "/usr/bin/python"}, "ansible_eth0": {"active": true, "device": "eth0", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "off [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "on [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": 
"on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "off [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "off", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]"}, "hw_timestamp_filters": [], "ipv4": {"address": "192.168.121.206", "broadcast": "192.168.121.255", "netmask": "255.255.255.0", "network": "192.168.121.0"}, "ipv6": [{"address": "fe80::5054:ff:fe22:d4bd", "prefix": "64", "scope": "link"}], "macaddress": "52:54:00:22:d4:bd", "module": "virtio_net", "mtu": 1500, "pciid": "virtio8", "promisc": false, "timestamping": ["rx_software", "software"], "type": "ether"}, "ansible_fips": false, "ansible_form_factor": "Other", "ansible_fqdn": "kube3", "ansible_hostname": "kube3", "ansible_interfaces": ["lo", "docker0", "eth0"], "ansible_is_chroot": false, "ansible_iscsi_iqn": "iqn.1994-05.com.redhat:7cbffc69d119", "ansible_kernel": "3.10.0-957.1.3.el7.x86_64", "ansible_lo": {"active": true, "device": "lo", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "on [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "on [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off [fixed]", "tx_scatter_gather": "on [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "on [fixed]"}, "hw_timestamp_filters": [], "ipv4": 
{"address": "127.0.0.1", "broadcast": "host", "netmask": "255.0.0.0", "network": "127.0.0.0"}, "ipv6": [{"address": "::1", "prefix": "128", "scope": "host"}], "mtu": 65536, "promisc": false, "timestamping": ["rx_software", "software"], "type": "loopback"}, "ansible_local": {}, "ansible_lsb": {}, "ansible_lvm": {"lvs": {"root": {"size_g": "26.77", "vg": "atomicos"}}, "pvs": {"/dev/vda2": {"free_g": "0", "size_g": "9.70", "vg": "atomicos"}, "/dev/vdb": {"free_g": "2.93", "size_g": "20.00", "vg": "atomicos"}}, "vgs": {"atomicos": {"free_g": "2.93", "num_lvs": "1", "num_pvs": "2", "size_g": "29.70"}}}, "ansible_machine": "x86_64", "ansible_machine_id": "ff4692c346ad48c7abf6e61c26e9d7b9", "ansible_memfree_mb": 1463, "ansible_memory_mb": {"nocache": {"free": 1654, "used": 184}, "real": {"free": 1463, "total": 1838, "used": 375}, "swap": {"cached": 0, "free": 0, "total": 0, "used": 0}}, "ansible_memtotal_mb": 1838, "ansible_mounts": [{"block_available": 6615848, "block_size": 4096, "block_total": 7014912, "block_used": 399064, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004056, "inode_total": 14034944, "inode_used": 30888, "mount": "/sysroot", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 27098513408, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6615848, "block_size": 4096, "block_total": 7014912, "block_used": 399064, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004056, "inode_total": 14034944, "inode_used": 30888, "mount": "/", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27098513408, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6615848, "block_size": 4096, "block_total": 7014912, "block_used": 399064, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004056, "inode_total": 14034944, "inode_used": 30888, "mount": "/usr", "options": "ro,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27098513408, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6615848, "block_size": 4096, "block_total": 7014912, "block_used": 399064, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004056, "inode_total": 14034944, "inode_used": 30888, "mount": "/var", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27098513408, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 54635, "block_size": 4096, "block_total": 75945, "block_used": 21310, "device": "/dev/vda1", "fstype": "xfs", "inode_available": 153280, "inode_total": 153600, "inode_used": 320, "mount": "/boot", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 223784960, "size_total": 311070720, "uuid": "5804e2ea-a86e-45e1-92c8-f225b7fb60e0"}, {"block_available": 6615848, "block_size": 4096, "block_total": 7014912, "block_used": 399064, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004056, "inode_total": 14034944, "inode_used": 30888, "mount": "/var/lib/docker/containers", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27098513408, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6615848, "block_size": 4096, "block_total": 7014912, "block_used": 399064, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", 
"inode_available": 14004056, "inode_total": 14034944, "inode_used": 30888, "mount": "/var/lib/docker/overlay2", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27098513408, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}], "ansible_nodename": "kube3", "ansible_os_family": "RedHat", "ansible_pkg_mgr": "atomic_container", "ansible_processor": ["0", "AuthenticAMD", "AMD Opteron(tm) Processor 4365 EE", "1", "AuthenticAMD", "AMD Opteron(tm) Processor 4365 EE"], "ansible_processor_cores": 1, "ansible_processor_count": 2, "ansible_processor_threads_per_core": 1, "ansible_processor_vcpus": 2, "ansible_product_name": "KVM", "ansible_product_serial": "NA", "ansible_product_uuid": "FF4692C3-46AD-48C7-ABF6-E61C26E9D7B9", "ansible_product_version": "RHEL 7.0.0 PC (i440FX + PIIX, 1996)", "ansible_python": {"executable": "/usr/bin/python", "has_sslcontext": true, "type": "CPython", "version": {"major": 2, "micro": 5, "minor": 7, "releaselevel": "final", "serial": 0}, "version_info": [2, 7, 5, "final", 0]}, "ansible_python_version": "2.7.5", "ansible_real_group_id": 0, "ansible_real_user_id": 0, "ansible_selinux": {"config_mode": "enforcing", "mode": "enforcing", "policyvers": 31, "status": "enabled", "type": "targeted"}, "ansible_selinux_python_present": true, "ansible_service_mgr": "systemd", "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB2+r2G7JmdjA7oIPSaV1T9WzPBP7Iqoa5HiYA66ozci8/0bQRErqa+1Mjb4TkxccQTzaSB2FluIb9VBt8nk6I4=", "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAIIXsNRBEIEC3Glc0k67DQt0Nw2+8yQ62ynHh4v2H8I2C", "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDU9/ohe1ntu60AGJr+Nl7BI1F0dXUvLqa/aAuTZfKlt8j7L2ev6US4lFj/dMUvGCUxyk9MrD8+G3XoeUROOrRerdfyHYvLmZhA6vT1jKXz5u1gb13UnFa8LGez/32WVoWWwHf3+2YA7FvyvViVTDMEC5Rpoy7KZu//fRbUhfRAIuITGEqmhd8zjuyU+4lAiey5rTBG4MreD0ahWwbJ7441Uhima/P3cgjM7siHoa/4YZNQvYSeI3KS5pPobYmnObpi7J0asv3iXF2S2nptzhnmIOHMwFaYQ/hx3OXLFgHFHC+93yY1cybUQ+tk322SMeaiyZsD2YM/oEeSKQ32GLar", "ansible_swapfree_mb": 0, "ansible_swaptotal_mb": 0, "ansible_system": "Linux", "ansible_system_capabilities": ["cap_chown", "cap_dac_override", "cap_dac_read_search", "cap_fowner", "cap_fsetid", "cap_kill", "cap_setgid", "cap_setuid", "cap_setpcap", "cap_linux_immutable", "cap_net_bind_service", "cap_net_broadcast", "cap_net_admin", "cap_net_raw", "cap_ipc_lock", "cap_ipc_owner", "cap_sys_module", "cap_sys_rawio", "cap_sys_chroot", "cap_sys_ptrace", "cap_sys_pacct", "cap_sys_admin", "cap_sys_boot", "cap_sys_nice", "cap_sys_resource", "cap_sys_time", "cap_sys_tty_config", "cap_mknod", "cap_lease", "cap_audit_write", "cap_audit_control", "cap_setfcap", "cap_mac_override", "cap_mac_admin", "cap_syslog", "35", "36+ep"], "ansible_system_capabilities_enforced": "True", "ansible_system_vendor": "Red Hat", "ansible_uptime_seconds": 35, "ansible_user_dir": "/root", "ansible_user_gecos": "root", "ansible_user_gid": 0, "ansible_user_id": "root", "ansible_user_shell": "/bin/bash", "ansible_user_uid": 0, "ansible_userspace_architecture": "x86_64", "ansible_userspace_bits": "64", "ansible_virtualization_role": "guest", "ansible_virtualization_type": "kvm", "gather_subset": ["all"], "module_setup": true}, "changed": false, "failed": false, "invocation": {"module_args": {"fact_path": "/etc/ansible/facts.d", "filter": "*", "gather_subset": ["all"], "gather_timeout": 10}}, "item": "kube3"}]} ok: [kube3 -> 192.168.121.206] => (item=kube3) fatal: [kube3]: UNREACHABLE! 
=> {"changed": false, "msg": "All items completed", "results": [{"_ansible_ignore_errors": null, "_ansible_item_label": "kube1", "_ansible_item_result": true, "item": "kube1", "msg": "SSH Error: data could not be sent to remote host \"192.168.121.207\". Make sure this host can be reached over ssh", "unreachable": true}, {"_ansible_delegated_vars": {"ansible_delegated_host": "kube2", "ansible_host": "192.168.121.108"}, "_ansible_ignore_errors": null, "_ansible_item_label": "kube2", "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "_ansible_verbose_override": true, "ansible_facts": {"ansible_all_ipv4_addresses": ["192.168.121.108", "172.17.0.1"], "ansible_all_ipv6_addresses": ["fe80::5054:ff:fe62:b7f2"], "ansible_apparmor": {"status": "disabled"}, "ansible_architecture": "x86_64", "ansible_bios_date": "01/01/2011", "ansible_bios_version": "0.5.1", "ansible_cmdline": {"BOOT_IMAGE": "/ostree/centos-atomic-host-ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/vmlinuz-3.10.0-957.1.3.el7.x86_64", "console": "ttyS0,115200n8", "crashkernel": "auto", "no_timer_check": true, "ostree": "/ostree/boot.1/centos-atomic-host/ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/0", "rd.lvm.lv": "atomicos/root", "root": "/dev/mapper/atomicos-root"}, "ansible_date_time": {"date": "2019-05-02", "day": "02", "epoch": "1556757535", "hour": "00", "iso8601": "2019-05-02T00:38:55Z", "iso8601_basic": "20190502T003855026012", "iso8601_basic_short": "20190502T003855", "iso8601_micro": "2019-05-02T00:38:55.026191Z", "minute": "38", "month": "05", "second": "55", "time": "00:38:55", "tz": "UTC", "tz_offset": "+0000", "weekday": "Thursday", "weekday_number": "4", "weeknumber": "17", "year": "2019"}, "ansible_default_ipv4": {"address": "192.168.121.108", "alias": "eth0", "broadcast": "192.168.121.255", "gateway": "192.168.121.1", "interface": "eth0", "macaddress": "52:54:00:62:b7:f2", "mtu": 1500, "netmask": "255.255.255.0", "network": "192.168.121.0", "type": "ether"}, "ansible_default_ipv6": {}, "ansible_device_links": {"ids": {"dm-0": ["dm-name-atomicos-root", "dm-uuid-LVM-66eMHV6iDKuNpuUkbfaS59FexWnof79axp8dVuJg8PQpi4u2jEXiS2onl9tiFl4L"], "vda2": ["lvm-pv-uuid-Igwjxr-wF4v-rdeh-DZGI-IDKO-Jk6L-s4N1kl"], "vdb": ["lvm-pv-uuid-y6m62w-WUbc-18QU-epOU-2Ksb-YU2v-ap0Ggu"]}, "labels": {}, "masters": {"vda2": ["dm-0"], "vdb": ["dm-0"]}, "uuids": {"dm-0": ["3aa470ac-9a1b-44c5-871a-d72d513af0e2"], "vda1": ["5804e2ea-a86e-45e1-92c8-f225b7fb60e0"]}}, "ansible_devices": {"dm-0": {"holders": [], "host": "", "links": {"ids": ["dm-name-atomicos-root", "dm-uuid-LVM-66eMHV6iDKuNpuUkbfaS59FexWnof79axp8dVuJg8PQpi4u2jEXiS2onl9tiFl4L"], "labels": [], "masters": [], "uuids": ["3aa470ac-9a1b-44c5-871a-d72d513af0e2"]}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "", "sectors": "56139776", "sectorsize": "512", "size": "26.77 GB", "support_discard": "0", "vendor": null, "virtual": 1}, "vda": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {"vda1": {"holders": [], "links": {"ids": [], "labels": [], "masters": [], "uuids": ["5804e2ea-a86e-45e1-92c8-f225b7fb60e0"]}, "sectors": "614400", "sectorsize": 512, "size": "300.00 MB", "start": "2048", "uuid": "5804e2ea-a86e-45e1-92c8-f225b7fb60e0"}, "vda2": {"holders": ["atomicos-root"], "links": {"ids": ["lvm-pv-uuid-Igwjxr-wF4v-rdeh-DZGI-IDKO-Jk6L-s4N1kl"], "labels": [], "masters": 
["dm-0"], "uuids": []}, "sectors": "20355072", "sectorsize": 512, "size": "9.71 GB", "start": "616448", "uuid": null}}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "23068672", "sectorsize": "512", "size": "11.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdb": {"holders": ["atomicos-root"], "host": "", "links": {"ids": ["lvm-pv-uuid-y6m62w-WUbc-18QU-epOU-2Ksb-YU2v-ap0Ggu"], "labels": [], "masters": ["dm-0"], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "41943040", "sectorsize": "512", "size": "20.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdc": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdd": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vde": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}}, "ansible_distribution": "CentOS", "ansible_distribution_file_parsed": true, "ansible_distribution_file_path": "/etc/redhat-release", "ansible_distribution_file_variety": "RedHat", "ansible_distribution_major_version": "7", "ansible_distribution_release": "Core", "ansible_distribution_version": "7.6.1810", "ansible_dns": {"nameservers": ["192.168.121.1"]}, "ansible_docker0": {"active": false, "device": "docker0", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "off [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "on", "tx_gre_csum_segmentation": "on", "tx_gre_segmentation": "on", "tx_gso_partial": "on", "tx_gso_robust": "on", "tx_ipip_segmentation": "on", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", 
"tx_scatter_gather_fraglist": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "on", "tx_udp_tnl_segmentation": "on", "tx_vlan_offload": "on", "tx_vlan_stag_hw_insert": "on", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]"}, "hw_timestamp_filters": [], "id": "8000.0242c6b3567c", "interfaces": [], "ipv4": {"address": "172.17.0.1", "broadcast": "global", "netmask": "255.255.0.0", "network": "172.17.0.0"}, "macaddress": "02:42:c6:b3:56:7c", "mtu": 1500, "promisc": false, "stp": false, "timestamping": ["rx_software", "software"], "type": "bridge"}, "ansible_domain": "", "ansible_effective_group_id": 0, "ansible_effective_user_id": 0, "ansible_env": {"HOME": "/root", "LANG": "en_US.UTF-8", "LOGNAME": "root", "MAIL": "/var/mail/vagrant", "PATH": "/sbin:/bin:/usr/sbin:/usr/bin", "PWD": "/var/home/vagrant", "SHELL": "/bin/bash", "SHLVL": "1", "SUDO_COMMAND": "/bin/sh -c echo BECOME-SUCCESS-otwmiruoqungkzwyphyiwwvtlhpykcry; /usr/bin/python", "SUDO_GID": "1000", "SUDO_UID": "1000", "SUDO_USER": "vagrant", "TERM": "unknown", "USER": "root", "USERNAME": "root", "XDG_SESSION_ID": "1", "_": "/usr/bin/python"}, "ansible_eth0": {"active": true, "device": "eth0", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "off [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "on [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "off [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "off", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]"}, "hw_timestamp_filters": [], "ipv4": {"address": "192.168.121.108", "broadcast": "192.168.121.255", "netmask": "255.255.255.0", "network": "192.168.121.0"}, "ipv6": [{"address": "fe80::5054:ff:fe62:b7f2", "prefix": "64", "scope": "link"}], "macaddress": "52:54:00:62:b7:f2", "module": "virtio_net", "mtu": 1500, "pciid": "virtio8", "promisc": false, "timestamping": ["rx_software", "software"], "type": "ether"}, "ansible_fips": false, 
"ansible_form_factor": "Other", "ansible_fqdn": "kube2", "ansible_hostname": "kube2", "ansible_interfaces": ["lo", "docker0", "eth0"], "ansible_is_chroot": false, "ansible_iscsi_iqn": "iqn.1994-05.com.redhat:7cbffc69d119", "ansible_kernel": "3.10.0-957.1.3.el7.x86_64", "ansible_lo": {"active": true, "device": "lo", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "on [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "on [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off [fixed]", "tx_scatter_gather": "on [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "on [fixed]"}, "hw_timestamp_filters": [], "ipv4": {"address": "127.0.0.1", "broadcast": "host", "netmask": "255.0.0.0", "network": "127.0.0.0"}, "ipv6": [{"address": "::1", "prefix": "128", "scope": "host"}], "mtu": 65536, "promisc": false, "timestamping": ["rx_software", "software"], "type": "loopback"}, "ansible_local": {}, "ansible_lsb": {}, "ansible_lvm": {"lvs": {"root": {"size_g": "26.77", "vg": "atomicos"}}, "pvs": {"/dev/vda2": {"free_g": "0", "size_g": "9.70", "vg": "atomicos"}, "/dev/vdb": {"free_g": "2.93", "size_g": "20.00", "vg": "atomicos"}}, "vgs": {"atomicos": {"free_g": "2.93", "num_lvs": "1", "num_pvs": "2", "size_g": "29.70"}}}, "ansible_machine": "x86_64", "ansible_machine_id": "42018ffcf0fd4a03991f9993e87b86ea", "ansible_memfree_mb": 1479, "ansible_memory_mb": {"nocache": {"free": 1648, "used": 190}, "real": {"free": 1479, "total": 1838, "used": 359}, "swap": {"cached": 0, "free": 0, "total": 0, "used": 0}}, "ansible_memtotal_mb": 1838, "ansible_mounts": [{"block_available": 6620729, "block_size": 4096, "block_total": 7014912, "block_used": 394183, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004254, "inode_total": 14034944, "inode_used": 30690, "mount": "/sysroot", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 27118505984, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6620729, "block_size": 4096, "block_total": 7014912, "block_used": 394183, 
"device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004254, "inode_total": 14034944, "inode_used": 30690, "mount": "/", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27118505984, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6620729, "block_size": 4096, "block_total": 7014912, "block_used": 394183, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004254, "inode_total": 14034944, "inode_used": 30690, "mount": "/usr", "options": "ro,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27118505984, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6620729, "block_size": 4096, "block_total": 7014912, "block_used": 394183, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004254, "inode_total": 14034944, "inode_used": 30690, "mount": "/var", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27118505984, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 51418, "block_size": 4096, "block_total": 75945, "block_used": 24527, "device": "/dev/vda1", "fstype": "xfs", "inode_available": 153279, "inode_total": 153600, "inode_used": 321, "mount": "/boot", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 210608128, "size_total": 311070720, "uuid": "5804e2ea-a86e-45e1-92c8-f225b7fb60e0"}, {"block_available": 6620729, "block_size": 4096, "block_total": 7014912, "block_used": 394183, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004254, "inode_total": 14034944, "inode_used": 30690, "mount": "/var/lib/docker/containers", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27118505984, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6620729, "block_size": 4096, "block_total": 7014912, "block_used": 394183, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004254, "inode_total": 14034944, "inode_used": 30690, "mount": "/var/lib/docker/overlay2", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27118505984, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}], "ansible_nodename": "kube2", "ansible_os_family": "RedHat", "ansible_pkg_mgr": "atomic_container", "ansible_processor": ["0", "AuthenticAMD", "AMD Opteron(tm) Processor 4365 EE", "1", "AuthenticAMD", "AMD Opteron(tm) Processor 4365 EE"], "ansible_processor_cores": 1, "ansible_processor_count": 2, "ansible_processor_threads_per_core": 1, "ansible_processor_vcpus": 2, "ansible_product_name": "KVM", "ansible_product_serial": "NA", "ansible_product_uuid": "42018FFC-F0FD-4A03-991F-9993E87B86EA", "ansible_product_version": "RHEL 7.0.0 PC (i440FX + PIIX, 1996)", "ansible_python": {"executable": "/usr/bin/python", "has_sslcontext": true, "type": "CPython", "version": {"major": 2, "micro": 5, "minor": 7, "releaselevel": "final", "serial": 0}, "version_info": [2, 7, 5, "final", 0]}, "ansible_python_version": "2.7.5", "ansible_real_group_id": 0, "ansible_real_user_id": 0, "ansible_selinux": {"config_mode": "enforcing", "mode": "enforcing", "policyvers": 31, "status": "enabled", "type": "targeted"}, "ansible_selinux_python_present": true, "ansible_service_mgr": "systemd", "ansible_ssh_host_key_ecdsa_public": 
"AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBArwgd837p4EpmyadxDCp/1a1Nb6jxJujeaqAFs7t/GMGVFdr45hWdpMTOzIrVeiRi/U5yIMtXvdjoQXefZaCCY=", "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAILRAgIb19WAGp+E0Dxe0CktQYrS4Jfta39PV87Nij40k", "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDKGKfdWkCS7buWpw7PWxfDWuxfduvFd2FKMGmzvPBpyGTLuuStmo3mgGsUsN1HAnC540yo5KpJHfu3AO1FRstJo/BAiiqIJAELBS8mzlRORXy3760AkaCJzlxxMgDLlUobvFViXDqbIuBxUMG6v5K4GptyvTDp8lIgCvbhTclcLHmTB1aM/Qcm+330nAxUsTGcRwHrmfgyiqnAyaWDLA2MAHdj4UUG0NvAWmxOimsZBN6a+AVw1izjF8yCi0HjfSXA60iZx27XZA9NkZC5VRdRjFW9kz4Zf8Wgc4dMGszuuNkWJvcsyGzQ9fqOJqhV+pi6FZd6nNKHZlHo8S4zb5JZ", "ansible_swapfree_mb": 0, "ansible_swaptotal_mb": 0, "ansible_system": "Linux", "ansible_system_capabilities": ["cap_chown", "cap_dac_override", "cap_dac_read_search", "cap_fowner", "cap_fsetid", "cap_kill", "cap_setgid", "cap_setuid", "cap_setpcap", "cap_linux_immutable", "cap_net_bind_service", "cap_net_broadcast", "cap_net_admin", "cap_net_raw", "cap_ipc_lock", "cap_ipc_owner", "cap_sys_module", "cap_sys_rawio", "cap_sys_chroot", "cap_sys_ptrace", "cap_sys_pacct", "cap_sys_admin", "cap_sys_boot", "cap_sys_nice", "cap_sys_resource", "cap_sys_time", "cap_sys_tty_config", "cap_mknod", "cap_lease", "cap_audit_write", "cap_audit_control", "cap_setfcap", "cap_mac_override", "cap_mac_admin", "cap_syslog", "35", "36+ep"], "ansible_system_capabilities_enforced": "True", "ansible_system_vendor": "Red Hat", "ansible_uptime_seconds": 31, "ansible_user_dir": "/root", "ansible_user_gecos": "root", "ansible_user_gid": 0, "ansible_user_id": "root", "ansible_user_shell": "/bin/bash", "ansible_user_uid": 0, "ansible_userspace_architecture": "x86_64", "ansible_userspace_bits": "64", "ansible_virtualization_role": "guest", "ansible_virtualization_type": "kvm", "gather_subset": ["all"], "module_setup": true}, "changed": false, "failed": false, "invocation": {"module_args": {"fact_path": "/etc/ansible/facts.d", "filter": "*", "gather_subset": ["all"], "gather_timeout": 10}}, "item": "kube2"}, {"_ansible_delegated_vars": {"ansible_delegated_host": "kube3", "ansible_host": "192.168.121.206"}, "_ansible_ignore_errors": null, "_ansible_item_label": "kube3", "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "_ansible_verbose_override": true, "ansible_facts": {"ansible_all_ipv4_addresses": ["192.168.121.206", "172.17.0.1"], "ansible_all_ipv6_addresses": ["fe80::5054:ff:fe22:d4bd"], "ansible_apparmor": {"status": "disabled"}, "ansible_architecture": "x86_64", "ansible_bios_date": "01/01/2011", "ansible_bios_version": "0.5.1", "ansible_cmdline": {"BOOT_IMAGE": "/ostree/centos-atomic-host-ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/vmlinuz-3.10.0-957.1.3.el7.x86_64", "console": "ttyS0,115200n8", "crashkernel": "auto", "no_timer_check": true, "ostree": "/ostree/boot.1/centos-atomic-host/ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/0", "rd.lvm.lv": "atomicos/root", "root": "/dev/mapper/atomicos-root"}, "ansible_date_time": {"date": "2019-05-02", "day": "02", "epoch": "1556757536", "hour": "00", "iso8601": "2019-05-02T00:38:56Z", "iso8601_basic": "20190502T003856392816", "iso8601_basic_short": "20190502T003856", "iso8601_micro": "2019-05-02T00:38:56.393033Z", "minute": "38", "month": "05", "second": "56", "time": "00:38:56", "tz": "UTC", "tz_offset": "+0000", "weekday": "Thursday", "weekday_number": "4", "weeknumber": "17", "year": "2019"}, "ansible_default_ipv4": {"address": "192.168.121.206", 
"alias": "eth0", "broadcast": "192.168.121.255", "gateway": "192.168.121.1", "interface": "eth0", "macaddress": "52:54:00:22:d4:bd", "mtu": 1500, "netmask": "255.255.255.0", "network": "192.168.121.0", "type": "ether"}, "ansible_default_ipv6": {}, "ansible_device_links": {"ids": {"dm-0": ["dm-name-atomicos-root", "dm-uuid-LVM-66eMHV6iDKuNpuUkbfaS59FexWnof79axp8dVuJg8PQpi4u2jEXiS2onl9tiFl4L"], "vda2": ["lvm-pv-uuid-Igwjxr-wF4v-rdeh-DZGI-IDKO-Jk6L-s4N1kl"], "vdb": ["lvm-pv-uuid-CQHcf5-D0Xn-5UA4-e5xN-xVKm-9p7S-fquWcF"]}, "labels": {}, "masters": {"vda2": ["dm-0"], "vdb": ["dm-0"]}, "uuids": {"dm-0": ["3aa470ac-9a1b-44c5-871a-d72d513af0e2"], "vda1": ["5804e2ea-a86e-45e1-92c8-f225b7fb60e0"]}}, "ansible_devices": {"dm-0": {"holders": [], "host": "", "links": {"ids": ["dm-name-atomicos-root", "dm-uuid-LVM-66eMHV6iDKuNpuUkbfaS59FexWnof79axp8dVuJg8PQpi4u2jEXiS2onl9tiFl4L"], "labels": [], "masters": [], "uuids": ["3aa470ac-9a1b-44c5-871a-d72d513af0e2"]}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "", "sectors": "56139776", "sectorsize": "512", "size": "26.77 GB", "support_discard": "0", "vendor": null, "virtual": 1}, "vda": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {"vda1": {"holders": [], "links": {"ids": [], "labels": [], "masters": [], "uuids": ["5804e2ea-a86e-45e1-92c8-f225b7fb60e0"]}, "sectors": "614400", "sectorsize": 512, "size": "300.00 MB", "start": "2048", "uuid": "5804e2ea-a86e-45e1-92c8-f225b7fb60e0"}, "vda2": {"holders": ["atomicos-root"], "links": {"ids": ["lvm-pv-uuid-Igwjxr-wF4v-rdeh-DZGI-IDKO-Jk6L-s4N1kl"], "labels": [], "masters": ["dm-0"], "uuids": []}, "sectors": "20355072", "sectorsize": 512, "size": "9.71 GB", "start": "616448", "uuid": null}}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "23068672", "sectorsize": "512", "size": "11.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdb": {"holders": ["atomicos-root"], "host": "", "links": {"ids": ["lvm-pv-uuid-CQHcf5-D0Xn-5UA4-e5xN-xVKm-9p7S-fquWcF"], "labels": [], "masters": ["dm-0"], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "41943040", "sectorsize": "512", "size": "20.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdc": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdd": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vde": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": 
"2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}}, "ansible_distribution": "CentOS", "ansible_distribution_file_parsed": true, "ansible_distribution_file_path": "/etc/redhat-release", "ansible_distribution_file_variety": "RedHat", "ansible_distribution_major_version": "7", "ansible_distribution_release": "Core", "ansible_distribution_version": "7.6.1810", "ansible_dns": {"nameservers": ["192.168.121.1"]}, "ansible_docker0": {"active": false, "device": "docker0", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "off [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "on", "tx_gre_csum_segmentation": "on", "tx_gre_segmentation": "on", "tx_gso_partial": "on", "tx_gso_robust": "on", "tx_ipip_segmentation": "on", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "on", "tx_udp_tnl_segmentation": "on", "tx_vlan_offload": "on", "tx_vlan_stag_hw_insert": "on", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]"}, "hw_timestamp_filters": [], "id": "8000.0242c79eb9d1", "interfaces": [], "ipv4": {"address": "172.17.0.1", "broadcast": "global", "netmask": "255.255.0.0", "network": "172.17.0.0"}, "macaddress": "02:42:c7:9e:b9:d1", "mtu": 1500, "promisc": false, "stp": false, "timestamping": ["rx_software", "software"], "type": "bridge"}, "ansible_domain": "", "ansible_effective_group_id": 0, "ansible_effective_user_id": 0, "ansible_env": {"HOME": "/root", "LANG": "en_US.UTF-8", "LOGNAME": "root", "MAIL": "/var/mail/vagrant", "PATH": "/sbin:/bin:/usr/sbin:/usr/bin", "PWD": "/var/home/vagrant", "SHELL": "/bin/bash", "SHLVL": "1", "SUDO_COMMAND": "/bin/sh -c echo BECOME-SUCCESS-gsppqyxjqgufbjvnpjxcvqgahrggjcih; /usr/bin/python", "SUDO_GID": "1000", "SUDO_UID": "1000", "SUDO_USER": "vagrant", "TERM": "unknown", "USER": "root", "USERNAME": "root", "XDG_SESSION_ID": "1", "_": "/usr/bin/python"}, "ansible_eth0": {"active": true, "device": "eth0", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "off [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": 
"off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "on [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "off [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "off", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]"}, "hw_timestamp_filters": [], "ipv4": {"address": "192.168.121.206", "broadcast": "192.168.121.255", "netmask": "255.255.255.0", "network": "192.168.121.0"}, "ipv6": [{"address": "fe80::5054:ff:fe22:d4bd", "prefix": "64", "scope": "link"}], "macaddress": "52:54:00:22:d4:bd", "module": "virtio_net", "mtu": 1500, "pciid": "virtio8", "promisc": false, "timestamping": ["rx_software", "software"], "type": "ether"}, "ansible_fips": false, "ansible_form_factor": "Other", "ansible_fqdn": "kube3", "ansible_hostname": "kube3", "ansible_interfaces": ["lo", "docker0", "eth0"], "ansible_is_chroot": false, "ansible_iscsi_iqn": "iqn.1994-05.com.redhat:7cbffc69d119", "ansible_kernel": "3.10.0-957.1.3.el7.x86_64", "ansible_lo": {"active": true, "device": "lo", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "on [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "on [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off [fixed]", "tx_scatter_gather": "on [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": 
"on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "on [fixed]"}, "hw_timestamp_filters": [], "ipv4": {"address": "127.0.0.1", "broadcast": "host", "netmask": "255.0.0.0", "network": "127.0.0.0"}, "ipv6": [{"address": "::1", "prefix": "128", "scope": "host"}], "mtu": 65536, "promisc": false, "timestamping": ["rx_software", "software"], "type": "loopback"}, "ansible_local": {}, "ansible_lsb": {}, "ansible_lvm": {"lvs": {"root": {"size_g": "26.77", "vg": "atomicos"}}, "pvs": {"/dev/vda2": {"free_g": "0", "size_g": "9.70", "vg": "atomicos"}, "/dev/vdb": {"free_g": "2.93", "size_g": "20.00", "vg": "atomicos"}}, "vgs": {"atomicos": {"free_g": "2.93", "num_lvs": "1", "num_pvs": "2", "size_g": "29.70"}}}, "ansible_machine": "x86_64", "ansible_machine_id": "ff4692c346ad48c7abf6e61c26e9d7b9", "ansible_memfree_mb": 1480, "ansible_memory_mb": {"nocache": {"free": 1657, "used": 181}, "real": {"free": 1480, "total": 1838, "used": 358}, "swap": {"cached": 0, "free": 0, "total": 0, "used": 0}}, "ansible_memtotal_mb": 1838, "ansible_mounts": [{"block_available": 6618199, "block_size": 4096, "block_total": 7014912, "block_used": 396713, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004111, "inode_total": 14034944, "inode_used": 30833, "mount": "/sysroot", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 27108143104, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6618151, "block_size": 4096, "block_total": 7014912, "block_used": 396761, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004106, "inode_total": 14034944, "inode_used": 30838, "mount": "/", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27107946496, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6618151, "block_size": 4096, "block_total": 7014912, "block_used": 396761, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004106, "inode_total": 14034944, "inode_used": 30838, "mount": "/usr", "options": "ro,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27107946496, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6618151, "block_size": 4096, "block_total": 7014912, "block_used": 396761, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004106, "inode_total": 14034944, "inode_used": 30838, "mount": "/var", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27107946496, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 51418, "block_size": 4096, "block_total": 75945, "block_used": 24527, "device": "/dev/vda1", "fstype": "xfs", "inode_available": 153279, "inode_total": 153600, "inode_used": 321, "mount": "/boot", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 210608128, "size_total": 311070720, "uuid": "5804e2ea-a86e-45e1-92c8-f225b7fb60e0"}, {"block_available": 6618151, "block_size": 4096, "block_total": 7014912, "block_used": 396761, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004106, "inode_total": 14034944, "inode_used": 30838, "mount": "/var/lib/docker/containers", "options": 
"rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27107946496, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6618151, "block_size": 4096, "block_total": 7014912, "block_used": 396761, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004106, "inode_total": 14034944, "inode_used": 30838, "mount": "/var/lib/docker/overlay2", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27107946496, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}], "ansible_nodename": "kube3", "ansible_os_family": "RedHat", "ansible_pkg_mgr": "atomic_container", "ansible_processor": ["0", "AuthenticAMD", "AMD Opteron(tm) Processor 4365 EE", "1", "AuthenticAMD", "AMD Opteron(tm) Processor 4365 EE"], "ansible_processor_cores": 1, "ansible_processor_count": 2, "ansible_processor_threads_per_core": 1, "ansible_processor_vcpus": 2, "ansible_product_name": "KVM", "ansible_product_serial": "NA", "ansible_product_uuid": "FF4692C3-46AD-48C7-ABF6-E61C26E9D7B9", "ansible_product_version": "RHEL 7.0.0 PC (i440FX + PIIX, 1996)", "ansible_python": {"executable": "/usr/bin/python", "has_sslcontext": true, "type": "CPython", "version": {"major": 2, "micro": 5, "minor": 7, "releaselevel": "final", "serial": 0}, "version_info": [2, 7, 5, "final", 0]}, "ansible_python_version": "2.7.5", "ansible_real_group_id": 0, "ansible_real_user_id": 0, "ansible_selinux": {"config_mode": "enforcing", "mode": "enforcing", "policyvers": 31, "status": "enabled", "type": "targeted"}, "ansible_selinux_python_present": true, "ansible_service_mgr": "systemd", "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB2+r2G7JmdjA7oIPSaV1T9WzPBP7Iqoa5HiYA66ozci8/0bQRErqa+1Mjb4TkxccQTzaSB2FluIb9VBt8nk6I4=", "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAIIXsNRBEIEC3Glc0k67DQt0Nw2+8yQ62ynHh4v2H8I2C", "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDU9/ohe1ntu60AGJr+Nl7BI1F0dXUvLqa/aAuTZfKlt8j7L2ev6US4lFj/dMUvGCUxyk9MrD8+G3XoeUROOrRerdfyHYvLmZhA6vT1jKXz5u1gb13UnFa8LGez/32WVoWWwHf3+2YA7FvyvViVTDMEC5Rpoy7KZu//fRbUhfRAIuITGEqmhd8zjuyU+4lAiey5rTBG4MreD0ahWwbJ7441Uhima/P3cgjM7siHoa/4YZNQvYSeI3KS5pPobYmnObpi7J0asv3iXF2S2nptzhnmIOHMwFaYQ/hx3OXLFgHFHC+93yY1cybUQ+tk322SMeaiyZsD2YM/oEeSKQ32GLar", "ansible_swapfree_mb": 0, "ansible_swaptotal_mb": 0, "ansible_system": "Linux", "ansible_system_capabilities": ["cap_chown", "cap_dac_override", "cap_dac_read_search", "cap_fowner", "cap_fsetid", "cap_kill", "cap_setgid", "cap_setuid", "cap_setpcap", "cap_linux_immutable", "cap_net_bind_service", "cap_net_broadcast", "cap_net_admin", "cap_net_raw", "cap_ipc_lock", "cap_ipc_owner", "cap_sys_module", "cap_sys_rawio", "cap_sys_chroot", "cap_sys_ptrace", "cap_sys_pacct", "cap_sys_admin", "cap_sys_boot", "cap_sys_nice", "cap_sys_resource", "cap_sys_time", "cap_sys_tty_config", "cap_mknod", "cap_lease", "cap_audit_write", "cap_audit_control", "cap_setfcap", "cap_mac_override", "cap_mac_admin", "cap_syslog", "35", "36+ep"], "ansible_system_capabilities_enforced": "True", "ansible_system_vendor": "Red Hat", "ansible_uptime_seconds": 32, "ansible_user_dir": "/root", "ansible_user_gecos": "root", "ansible_user_gid": 0, "ansible_user_id": "root", "ansible_user_shell": "/bin/bash", "ansible_user_uid": 0, "ansible_userspace_architecture": "x86_64", "ansible_userspace_bits": "64", "ansible_virtualization_role": "guest", "ansible_virtualization_type": "kvm", "gather_subset": 
["all"], "module_setup": true}, "changed": false, "failed": false, "invocation": {"module_args": {"fact_path": "/etc/ansible/facts.d", "filter": "*", "gather_subset": ["all"], "gather_timeout": 10}}, "item": "kube3"}, {"_ansible_ignore_errors": null, "_ansible_item_label": "kube1", "_ansible_item_result": true, "item": "kube1", "msg": "SSH Error: data could not be sent to remote host \"192.168.121.207\". Make sure this host can be reached over ssh", "unreachable": true}, {"_ansible_delegated_vars": {"ansible_delegated_host": "kube2", "ansible_host": "192.168.121.108"}, "_ansible_ignore_errors": null, "_ansible_item_label": "kube2", "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "_ansible_verbose_override": true, "ansible_facts": {"ansible_all_ipv4_addresses": ["192.168.121.108", "172.17.0.1"], "ansible_all_ipv6_addresses": ["fe80::5054:ff:fe62:b7f2"], "ansible_apparmor": {"status": "disabled"}, "ansible_architecture": "x86_64", "ansible_bios_date": "01/01/2011", "ansible_bios_version": "0.5.1", "ansible_cmdline": {"BOOT_IMAGE": "/ostree/centos-atomic-host-ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/vmlinuz-3.10.0-957.1.3.el7.x86_64", "console": "ttyS0,115200n8", "crashkernel": "auto", "no_timer_check": true, "ostree": "/ostree/boot.1/centos-atomic-host/ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/0", "rd.lvm.lv": "atomicos/root", "root": "/dev/mapper/atomicos-root"}, "ansible_date_time": {"date": "2019-05-02", "day": "02", "epoch": "1556757537", "hour": "00", "iso8601": "2019-05-02T00:38:57Z", "iso8601_basic": "20190502T003857998368", "iso8601_basic_short": "20190502T003857", "iso8601_micro": "2019-05-02T00:38:57.998555Z", "minute": "38", "month": "05", "second": "57", "time": "00:38:57", "tz": "UTC", "tz_offset": "+0000", "weekday": "Thursday", "weekday_number": "4", "weeknumber": "17", "year": "2019"}, "ansible_default_ipv4": {"address": "192.168.121.108", "alias": "eth0", "broadcast": "192.168.121.255", "gateway": "192.168.121.1", "interface": "eth0", "macaddress": "52:54:00:62:b7:f2", "mtu": 1500, "netmask": "255.255.255.0", "network": "192.168.121.0", "type": "ether"}, "ansible_default_ipv6": {}, "ansible_device_links": {"ids": {"dm-0": ["dm-name-atomicos-root", "dm-uuid-LVM-66eMHV6iDKuNpuUkbfaS59FexWnof79axp8dVuJg8PQpi4u2jEXiS2onl9tiFl4L"], "vda2": ["lvm-pv-uuid-Igwjxr-wF4v-rdeh-DZGI-IDKO-Jk6L-s4N1kl"], "vdb": ["lvm-pv-uuid-y6m62w-WUbc-18QU-epOU-2Ksb-YU2v-ap0Ggu"]}, "labels": {}, "masters": {"vda2": ["dm-0"], "vdb": ["dm-0"]}, "uuids": {"dm-0": ["3aa470ac-9a1b-44c5-871a-d72d513af0e2"], "vda1": ["5804e2ea-a86e-45e1-92c8-f225b7fb60e0"]}}, "ansible_devices": {"dm-0": {"holders": [], "host": "", "links": {"ids": ["dm-name-atomicos-root", "dm-uuid-LVM-66eMHV6iDKuNpuUkbfaS59FexWnof79axp8dVuJg8PQpi4u2jEXiS2onl9tiFl4L"], "labels": [], "masters": [], "uuids": ["3aa470ac-9a1b-44c5-871a-d72d513af0e2"]}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "", "sectors": "56139776", "sectorsize": "512", "size": "26.77 GB", "support_discard": "0", "vendor": null, "virtual": 1}, "vda": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {"vda1": {"holders": [], "links": {"ids": [], "labels": [], "masters": [], "uuids": ["5804e2ea-a86e-45e1-92c8-f225b7fb60e0"]}, "sectors": "614400", "sectorsize": 512, "size": "300.00 MB", "start": "2048", "uuid": 
"5804e2ea-a86e-45e1-92c8-f225b7fb60e0"}, "vda2": {"holders": ["atomicos-root"], "links": {"ids": ["lvm-pv-uuid-Igwjxr-wF4v-rdeh-DZGI-IDKO-Jk6L-s4N1kl"], "labels": [], "masters": ["dm-0"], "uuids": []}, "sectors": "20355072", "sectorsize": 512, "size": "9.71 GB", "start": "616448", "uuid": null}}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "23068672", "sectorsize": "512", "size": "11.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdb": {"holders": ["atomicos-root"], "host": "", "links": {"ids": ["lvm-pv-uuid-y6m62w-WUbc-18QU-epOU-2Ksb-YU2v-ap0Ggu"], "labels": [], "masters": ["dm-0"], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "41943040", "sectorsize": "512", "size": "20.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdc": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdd": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vde": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}}, "ansible_distribution": "CentOS", "ansible_distribution_file_parsed": true, "ansible_distribution_file_path": "/etc/redhat-release", "ansible_distribution_file_variety": "RedHat", "ansible_distribution_major_version": "7", "ansible_distribution_release": "Core", "ansible_distribution_version": "7.6.1810", "ansible_dns": {"nameservers": ["192.168.121.1"]}, "ansible_docker0": {"active": false, "device": "docker0", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "off [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "on", "tx_gre_csum_segmentation": "on", 
"tx_gre_segmentation": "on", "tx_gso_partial": "on", "tx_gso_robust": "on", "tx_ipip_segmentation": "on", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "on", "tx_udp_tnl_segmentation": "on", "tx_vlan_offload": "on", "tx_vlan_stag_hw_insert": "on", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]"}, "hw_timestamp_filters": [], "id": "8000.0242c6b3567c", "interfaces": [], "ipv4": {"address": "172.17.0.1", "broadcast": "global", "netmask": "255.255.0.0", "network": "172.17.0.0"}, "macaddress": "02:42:c6:b3:56:7c", "mtu": 1500, "promisc": false, "stp": false, "timestamping": ["rx_software", "software"], "type": "bridge"}, "ansible_domain": "", "ansible_effective_group_id": 0, "ansible_effective_user_id": 0, "ansible_env": {"HOME": "/root", "LANG": "en_US.UTF-8", "LOGNAME": "root", "MAIL": "/var/mail/vagrant", "PATH": "/sbin:/bin:/usr/sbin:/usr/bin", "PWD": "/var/home/vagrant", "SHELL": "/bin/bash", "SHLVL": "1", "SUDO_COMMAND": "/bin/sh -c echo BECOME-SUCCESS-ontfkjxiprwcatozskpsdvjtmutmfolp; /usr/bin/python", "SUDO_GID": "1000", "SUDO_UID": "1000", "SUDO_USER": "vagrant", "TERM": "unknown", "USER": "root", "USERNAME": "root", "XDG_SESSION_ID": "1", "_": "/usr/bin/python"}, "ansible_eth0": {"active": true, "device": "eth0", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "off [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "on [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "off [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "off", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]"}, "hw_timestamp_filters": [], "ipv4": {"address": "192.168.121.108", "broadcast": "192.168.121.255", "netmask": "255.255.255.0", "network": "192.168.121.0"}, "ipv6": [{"address": "fe80::5054:ff:fe62:b7f2", "prefix": "64", "scope": "link"}], "macaddress": 
"52:54:00:62:b7:f2", "module": "virtio_net", "mtu": 1500, "pciid": "virtio8", "promisc": false, "timestamping": ["rx_software", "software"], "type": "ether"}, "ansible_fips": false, "ansible_form_factor": "Other", "ansible_fqdn": "kube2", "ansible_hostname": "kube2", "ansible_interfaces": ["lo", "docker0", "eth0"], "ansible_is_chroot": false, "ansible_iscsi_iqn": "iqn.1994-05.com.redhat:7cbffc69d119", "ansible_kernel": "3.10.0-957.1.3.el7.x86_64", "ansible_lo": {"active": true, "device": "lo", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "on [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "on [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off [fixed]", "tx_scatter_gather": "on [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "on [fixed]"}, "hw_timestamp_filters": [], "ipv4": {"address": "127.0.0.1", "broadcast": "host", "netmask": "255.0.0.0", "network": "127.0.0.0"}, "ipv6": [{"address": "::1", "prefix": "128", "scope": "host"}], "mtu": 65536, "promisc": false, "timestamping": ["rx_software", "software"], "type": "loopback"}, "ansible_local": {}, "ansible_lsb": {}, "ansible_lvm": {"lvs": {"root": {"size_g": "26.77", "vg": "atomicos"}}, "pvs": {"/dev/vda2": {"free_g": "0", "size_g": "9.70", "vg": "atomicos"}, "/dev/vdb": {"free_g": "2.93", "size_g": "20.00", "vg": "atomicos"}}, "vgs": {"atomicos": {"free_g": "2.93", "num_lvs": "1", "num_pvs": "2", "size_g": "29.70"}}}, "ansible_machine": "x86_64", "ansible_machine_id": "42018ffcf0fd4a03991f9993e87b86ea", "ansible_memfree_mb": 1464, "ansible_memory_mb": {"nocache": {"free": 1645, "used": 193}, "real": {"free": 1464, "total": 1838, "used": 374}, "swap": {"cached": 0, "free": 0, "total": 0, "used": 0}}, "ansible_memtotal_mb": 1838, "ansible_mounts": [{"block_available": 6617322, "block_size": 4096, "block_total": 7014912, "block_used": 397590, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004086, "inode_total": 14034944, "inode_used": 30858, "mount": "/sysroot", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 
27104550912, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6617322, "block_size": 4096, "block_total": 7014912, "block_used": 397590, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004086, "inode_total": 14034944, "inode_used": 30858, "mount": "/", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27104550912, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6617322, "block_size": 4096, "block_total": 7014912, "block_used": 397590, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004086, "inode_total": 14034944, "inode_used": 30858, "mount": "/usr", "options": "ro,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27104550912, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6617322, "block_size": 4096, "block_total": 7014912, "block_used": 397590, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004086, "inode_total": 14034944, "inode_used": 30858, "mount": "/var", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27104550912, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 51418, "block_size": 4096, "block_total": 75945, "block_used": 24527, "device": "/dev/vda1", "fstype": "xfs", "inode_available": 153279, "inode_total": 153600, "inode_used": 321, "mount": "/boot", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 210608128, "size_total": 311070720, "uuid": "5804e2ea-a86e-45e1-92c8-f225b7fb60e0"}, {"block_available": 6617240, "block_size": 4096, "block_total": 7014912, "block_used": 397672, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004082, "inode_total": 14034944, "inode_used": 30862, "mount": "/var/lib/docker/containers", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27104215040, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6617240, "block_size": 4096, "block_total": 7014912, "block_used": 397672, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004082, "inode_total": 14034944, "inode_used": 30862, "mount": "/var/lib/docker/overlay2", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27104215040, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}], "ansible_nodename": "kube2", "ansible_os_family": "RedHat", "ansible_pkg_mgr": "atomic_container", "ansible_processor": ["0", "AuthenticAMD", "AMD Opteron(tm) Processor 4365 EE", "1", "AuthenticAMD", "AMD Opteron(tm) Processor 4365 EE"], "ansible_processor_cores": 1, "ansible_processor_count": 2, "ansible_processor_threads_per_core": 1, "ansible_processor_vcpus": 2, "ansible_product_name": "KVM", "ansible_product_serial": "NA", "ansible_product_uuid": "42018FFC-F0FD-4A03-991F-9993E87B86EA", "ansible_product_version": "RHEL 7.0.0 PC (i440FX + PIIX, 1996)", "ansible_python": {"executable": "/usr/bin/python", "has_sslcontext": true, "type": "CPython", "version": {"major": 2, "micro": 5, "minor": 7, "releaselevel": "final", "serial": 0}, "version_info": [2, 7, 5, "final", 0]}, "ansible_python_version": "2.7.5", "ansible_real_group_id": 0, "ansible_real_user_id": 0, "ansible_selinux": {"config_mode": "enforcing", "mode": "enforcing", "policyvers": 31, "status": "enabled", 
"type": "targeted"}, "ansible_selinux_python_present": true, "ansible_service_mgr": "systemd", "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBArwgd837p4EpmyadxDCp/1a1Nb6jxJujeaqAFs7t/GMGVFdr45hWdpMTOzIrVeiRi/U5yIMtXvdjoQXefZaCCY=", "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAILRAgIb19WAGp+E0Dxe0CktQYrS4Jfta39PV87Nij40k", "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDKGKfdWkCS7buWpw7PWxfDWuxfduvFd2FKMGmzvPBpyGTLuuStmo3mgGsUsN1HAnC540yo5KpJHfu3AO1FRstJo/BAiiqIJAELBS8mzlRORXy3760AkaCJzlxxMgDLlUobvFViXDqbIuBxUMG6v5K4GptyvTDp8lIgCvbhTclcLHmTB1aM/Qcm+330nAxUsTGcRwHrmfgyiqnAyaWDLA2MAHdj4UUG0NvAWmxOimsZBN6a+AVw1izjF8yCi0HjfSXA60iZx27XZA9NkZC5VRdRjFW9kz4Zf8Wgc4dMGszuuNkWJvcsyGzQ9fqOJqhV+pi6FZd6nNKHZlHo8S4zb5JZ", "ansible_swapfree_mb": 0, "ansible_swaptotal_mb": 0, "ansible_system": "Linux", "ansible_system_capabilities": ["cap_chown", "cap_dac_override", "cap_dac_read_search", "cap_fowner", "cap_fsetid", "cap_kill", "cap_setgid", "cap_setuid", "cap_setpcap", "cap_linux_immutable", "cap_net_bind_service", "cap_net_broadcast", "cap_net_admin", "cap_net_raw", "cap_ipc_lock", "cap_ipc_owner", "cap_sys_module", "cap_sys_rawio", "cap_sys_chroot", "cap_sys_ptrace", "cap_sys_pacct", "cap_sys_admin", "cap_sys_boot", "cap_sys_nice", "cap_sys_resource", "cap_sys_time", "cap_sys_tty_config", "cap_mknod", "cap_lease", "cap_audit_write", "cap_audit_control", "cap_setfcap", "cap_mac_override", "cap_mac_admin", "cap_syslog", "35", "36+ep"], "ansible_system_capabilities_enforced": "True", "ansible_system_vendor": "Red Hat", "ansible_uptime_seconds": 34, "ansible_user_dir": "/root", "ansible_user_gecos": "root", "ansible_user_gid": 0, "ansible_user_id": "root", "ansible_user_shell": "/bin/bash", "ansible_user_uid": 0, "ansible_userspace_architecture": "x86_64", "ansible_userspace_bits": "64", "ansible_virtualization_role": "guest", "ansible_virtualization_type": "kvm", "gather_subset": ["all"], "module_setup": true}, "changed": false, "failed": false, "invocation": {"module_args": {"fact_path": "/etc/ansible/facts.d", "filter": "*", "gather_subset": ["all"], "gather_timeout": 10}}, "item": "kube2"}, {"_ansible_delegated_vars": {"ansible_delegated_host": "kube3", "ansible_host": "192.168.121.206"}, "_ansible_ignore_errors": null, "_ansible_item_label": "kube3", "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "_ansible_verbose_override": true, "ansible_facts": {"ansible_all_ipv4_addresses": ["192.168.121.206", "172.17.0.1"], "ansible_all_ipv6_addresses": ["fe80::5054:ff:fe22:d4bd"], "ansible_apparmor": {"status": "disabled"}, "ansible_architecture": "x86_64", "ansible_bios_date": "01/01/2011", "ansible_bios_version": "0.5.1", "ansible_cmdline": {"BOOT_IMAGE": "/ostree/centos-atomic-host-ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/vmlinuz-3.10.0-957.1.3.el7.x86_64", "console": "ttyS0,115200n8", "crashkernel": "auto", "no_timer_check": true, "ostree": "/ostree/boot.1/centos-atomic-host/ab5152e674e2dc0e949e8fd5f830ee7ebd51491f3f6fe4acd995fef761c6ab95/0", "rd.lvm.lv": "atomicos/root", "root": "/dev/mapper/atomicos-root"}, "ansible_date_time": {"date": "2019-05-02", "day": "02", "epoch": "1556757539", "hour": "00", "iso8601": "2019-05-02T00:38:59Z", "iso8601_basic": "20190502T003859717175", "iso8601_basic_short": "20190502T003859", "iso8601_micro": "2019-05-02T00:38:59.717340Z", "minute": "38", "month": "05", "second": "59", "time": "00:38:59", "tz": "UTC", "tz_offset": "+0000", 
"weekday": "Thursday", "weekday_number": "4", "weeknumber": "17", "year": "2019"}, "ansible_default_ipv4": {"address": "192.168.121.206", "alias": "eth0", "broadcast": "192.168.121.255", "gateway": "192.168.121.1", "interface": "eth0", "macaddress": "52:54:00:22:d4:bd", "mtu": 1500, "netmask": "255.255.255.0", "network": "192.168.121.0", "type": "ether"}, "ansible_default_ipv6": {}, "ansible_device_links": {"ids": {"dm-0": ["dm-name-atomicos-root", "dm-uuid-LVM-66eMHV6iDKuNpuUkbfaS59FexWnof79axp8dVuJg8PQpi4u2jEXiS2onl9tiFl4L"], "vda2": ["lvm-pv-uuid-Igwjxr-wF4v-rdeh-DZGI-IDKO-Jk6L-s4N1kl"], "vdb": ["lvm-pv-uuid-CQHcf5-D0Xn-5UA4-e5xN-xVKm-9p7S-fquWcF"]}, "labels": {}, "masters": {"vda2": ["dm-0"], "vdb": ["dm-0"]}, "uuids": {"dm-0": ["3aa470ac-9a1b-44c5-871a-d72d513af0e2"], "vda1": ["5804e2ea-a86e-45e1-92c8-f225b7fb60e0"]}}, "ansible_devices": {"dm-0": {"holders": [], "host": "", "links": {"ids": ["dm-name-atomicos-root", "dm-uuid-LVM-66eMHV6iDKuNpuUkbfaS59FexWnof79axp8dVuJg8PQpi4u2jEXiS2onl9tiFl4L"], "labels": [], "masters": [], "uuids": ["3aa470ac-9a1b-44c5-871a-d72d513af0e2"]}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "", "sectors": "56139776", "sectorsize": "512", "size": "26.77 GB", "support_discard": "0", "vendor": null, "virtual": 1}, "vda": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {"vda1": {"holders": [], "links": {"ids": [], "labels": [], "masters": [], "uuids": ["5804e2ea-a86e-45e1-92c8-f225b7fb60e0"]}, "sectors": "614400", "sectorsize": 512, "size": "300.00 MB", "start": "2048", "uuid": "5804e2ea-a86e-45e1-92c8-f225b7fb60e0"}, "vda2": {"holders": ["atomicos-root"], "links": {"ids": ["lvm-pv-uuid-Igwjxr-wF4v-rdeh-DZGI-IDKO-Jk6L-s4N1kl"], "labels": [], "masters": ["dm-0"], "uuids": []}, "sectors": "20355072", "sectorsize": 512, "size": "9.71 GB", "start": "616448", "uuid": null}}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "23068672", "sectorsize": "512", "size": "11.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdb": {"holders": ["atomicos-root"], "host": "", "links": {"ids": ["lvm-pv-uuid-CQHcf5-D0Xn-5UA4-e5xN-xVKm-9p7S-fquWcF"], "labels": [], "masters": ["dm-0"], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "41943040", "sectorsize": "512", "size": "20.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdc": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vdd": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}, "vde": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, 
"partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2147483648", "sectorsize": "512", "size": "1.00 TB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}}, "ansible_distribution": "CentOS", "ansible_distribution_file_parsed": true, "ansible_distribution_file_path": "/etc/redhat-release", "ansible_distribution_file_variety": "RedHat", "ansible_distribution_major_version": "7", "ansible_distribution_release": "Core", "ansible_distribution_version": "7.6.1810", "ansible_dns": {"nameservers": ["192.168.121.1"]}, "ansible_docker0": {"active": false, "device": "docker0", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "off [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "on", "tx_gre_csum_segmentation": "on", "tx_gre_segmentation": "on", "tx_gso_partial": "on", "tx_gso_robust": "on", "tx_ipip_segmentation": "on", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "on", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "on", "tx_udp_tnl_segmentation": "on", "tx_vlan_offload": "on", "tx_vlan_stag_hw_insert": "on", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]"}, "hw_timestamp_filters": [], "id": "8000.0242c79eb9d1", "interfaces": [], "ipv4": {"address": "172.17.0.1", "broadcast": "global", "netmask": "255.255.0.0", "network": "172.17.0.0"}, "macaddress": "02:42:c7:9e:b9:d1", "mtu": 1500, "promisc": false, "stp": false, "timestamping": ["rx_software", "software"], "type": "bridge"}, "ansible_domain": "", "ansible_effective_group_id": 0, "ansible_effective_user_id": 0, "ansible_env": {"HOME": "/root", "LANG": "en_US.UTF-8", "LOGNAME": "root", "MAIL": "/var/mail/vagrant", "PATH": "/sbin:/bin:/usr/sbin:/usr/bin", "PWD": "/var/home/vagrant", "SHELL": "/bin/bash", "SHLVL": "1", "SUDO_COMMAND": "/bin/sh -c echo BECOME-SUCCESS-qwdkunlgoheqkqotewpkodzdcltjichx; /usr/bin/python", "SUDO_GID": "1000", "SUDO_UID": "1000", "SUDO_USER": "vagrant", "TERM": "unknown", "USER": "root", "USERNAME": "root", "XDG_SESSION_ID": "1", "_": "/usr/bin/python"}, "ansible_eth0": {"active": true, "device": "eth0", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "off 
[fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "on [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "off [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "off", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]"}, "hw_timestamp_filters": [], "ipv4": {"address": "192.168.121.206", "broadcast": "192.168.121.255", "netmask": "255.255.255.0", "network": "192.168.121.0"}, "ipv6": [{"address": "fe80::5054:ff:fe22:d4bd", "prefix": "64", "scope": "link"}], "macaddress": "52:54:00:22:d4:bd", "module": "virtio_net", "mtu": 1500, "pciid": "virtio8", "promisc": false, "timestamping": ["rx_software", "software"], "type": "ether"}, "ansible_fips": false, "ansible_form_factor": "Other", "ansible_fqdn": "kube3", "ansible_hostname": "kube3", "ansible_interfaces": ["lo", "docker0", "eth0"], "ansible_is_chroot": false, "ansible_iscsi_iqn": "iqn.1994-05.com.redhat:7cbffc69d119", "ansible_kernel": "3.10.0-957.1.3.el7.x86_64", "ansible_lo": {"active": true, "device": "lo", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "on [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "on [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off [fixed]", "tx_scatter_gather": "on [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "tx_sctp_segmentation": "on", 
"tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "on [fixed]"}, "hw_timestamp_filters": [], "ipv4": {"address": "127.0.0.1", "broadcast": "host", "netmask": "255.0.0.0", "network": "127.0.0.0"}, "ipv6": [{"address": "::1", "prefix": "128", "scope": "host"}], "mtu": 65536, "promisc": false, "timestamping": ["rx_software", "software"], "type": "loopback"}, "ansible_local": {}, "ansible_lsb": {}, "ansible_lvm": {"lvs": {"root": {"size_g": "26.77", "vg": "atomicos"}}, "pvs": {"/dev/vda2": {"free_g": "0", "size_g": "9.70", "vg": "atomicos"}, "/dev/vdb": {"free_g": "2.93", "size_g": "20.00", "vg": "atomicos"}}, "vgs": {"atomicos": {"free_g": "2.93", "num_lvs": "1", "num_pvs": "2", "size_g": "29.70"}}}, "ansible_machine": "x86_64", "ansible_machine_id": "ff4692c346ad48c7abf6e61c26e9d7b9", "ansible_memfree_mb": 1456, "ansible_memory_mb": {"nocache": {"free": 1647, "used": 191}, "real": {"free": 1456, "total": 1838, "used": 382}, "swap": {"cached": 0, "free": 0, "total": 0, "used": 0}}, "ansible_memtotal_mb": 1838, "ansible_mounts": [{"block_available": 6615848, "block_size": 4096, "block_total": 7014912, "block_used": 399064, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004056, "inode_total": 14034944, "inode_used": 30888, "mount": "/sysroot", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 27098513408, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6615848, "block_size": 4096, "block_total": 7014912, "block_used": 399064, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004056, "inode_total": 14034944, "inode_used": 30888, "mount": "/", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27098513408, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6615848, "block_size": 4096, "block_total": 7014912, "block_used": 399064, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004056, "inode_total": 14034944, "inode_used": 30888, "mount": "/usr", "options": "ro,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27098513408, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6615848, "block_size": 4096, "block_total": 7014912, "block_used": 399064, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004056, "inode_total": 14034944, "inode_used": 30888, "mount": "/var", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27098513408, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 54635, "block_size": 4096, "block_total": 75945, "block_used": 21310, "device": "/dev/vda1", "fstype": "xfs", "inode_available": 153280, "inode_total": 153600, "inode_used": 320, "mount": "/boot", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 223784960, "size_total": 311070720, "uuid": "5804e2ea-a86e-45e1-92c8-f225b7fb60e0"}, {"block_available": 6615848, "block_size": 4096, "block_total": 7014912, "block_used": 399064, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", 
"inode_available": 14004056, "inode_total": 14034944, "inode_used": 30888, "mount": "/var/lib/docker/containers", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27098513408, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}, {"block_available": 6615848, "block_size": 4096, "block_total": 7014912, "block_used": 399064, "device": "/dev/mapper/atomicos-root", "fstype": "xfs", "inode_available": 14004056, "inode_total": 14034944, "inode_used": 30888, "mount": "/var/lib/docker/overlay2", "options": "rw,seclabel,relatime,attr2,inode64,noquota,bind", "size_available": 27098513408, "size_total": 28733079552, "uuid": "3aa470ac-9a1b-44c5-871a-d72d513af0e2"}], "ansible_nodename": "kube3", "ansible_os_family": "RedHat", "ansible_pkg_mgr": "atomic_container", "ansible_processor": ["0", "AuthenticAMD", "AMD Opteron(tm) Processor 4365 EE", "1", "AuthenticAMD", "AMD Opteron(tm) Processor 4365 EE"], "ansible_processor_cores": 1, "ansible_processor_count": 2, "ansible_processor_threads_per_core": 1, "ansible_processor_vcpus": 2, "ansible_product_name": "KVM", "ansible_product_serial": "NA", "ansible_product_uuid": "FF4692C3-46AD-48C7-ABF6-E61C26E9D7B9", "ansible_product_version": "RHEL 7.0.0 PC (i440FX + PIIX, 1996)", "ansible_python": {"executable": "/usr/bin/python", "has_sslcontext": true, "type": "CPython", "version": {"major": 2, "micro": 5, "minor": 7, "releaselevel": "final", "serial": 0}, "version_info": [2, 7, 5, "final", 0]}, "ansible_python_version": "2.7.5", "ansible_real_group_id": 0, "ansible_real_user_id": 0, "ansible_selinux": {"config_mode": "enforcing", "mode": "enforcing", "policyvers": 31, "status": "enabled", "type": "targeted"}, "ansible_selinux_python_present": true, "ansible_service_mgr": "systemd", "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB2+r2G7JmdjA7oIPSaV1T9WzPBP7Iqoa5HiYA66ozci8/0bQRErqa+1Mjb4TkxccQTzaSB2FluIb9VBt8nk6I4=", "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAIIXsNRBEIEC3Glc0k67DQt0Nw2+8yQ62ynHh4v2H8I2C", "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDU9/ohe1ntu60AGJr+Nl7BI1F0dXUvLqa/aAuTZfKlt8j7L2ev6US4lFj/dMUvGCUxyk9MrD8+G3XoeUROOrRerdfyHYvLmZhA6vT1jKXz5u1gb13UnFa8LGez/32WVoWWwHf3+2YA7FvyvViVTDMEC5Rpoy7KZu//fRbUhfRAIuITGEqmhd8zjuyU+4lAiey5rTBG4MreD0ahWwbJ7441Uhima/P3cgjM7siHoa/4YZNQvYSeI3KS5pPobYmnObpi7J0asv3iXF2S2nptzhnmIOHMwFaYQ/hx3OXLFgHFHC+93yY1cybUQ+tk322SMeaiyZsD2YM/oEeSKQ32GLar", "ansible_swapfree_mb": 0, "ansible_swaptotal_mb": 0, "ansible_system": "Linux", "ansible_system_capabilities": ["cap_chown", "cap_dac_override", "cap_dac_read_search", "cap_fowner", "cap_fsetid", "cap_kill", "cap_setgid", "cap_setuid", "cap_setpcap", "cap_linux_immutable", "cap_net_bind_service", "cap_net_broadcast", "cap_net_admin", "cap_net_raw", "cap_ipc_lock", "cap_ipc_owner", "cap_sys_module", "cap_sys_rawio", "cap_sys_chroot", "cap_sys_ptrace", "cap_sys_pacct", "cap_sys_admin", "cap_sys_boot", "cap_sys_nice", "cap_sys_resource", "cap_sys_time", "cap_sys_tty_config", "cap_mknod", "cap_lease", "cap_audit_write", "cap_audit_control", "cap_setfcap", "cap_mac_override", "cap_mac_admin", "cap_syslog", "35", "36+ep"], "ansible_system_capabilities_enforced": "True", "ansible_system_vendor": "Red Hat", "ansible_uptime_seconds": 35, "ansible_user_dir": "/root", "ansible_user_gecos": "root", "ansible_user_gid": 0, "ansible_user_id": "root", "ansible_user_shell": "/bin/bash", "ansible_user_uid": 0, "ansible_userspace_architecture": "x86_64", 
"ansible_userspace_bits": "64", "ansible_virtualization_role": "guest", "ansible_virtualization_type": "kvm", "gather_subset": ["all"], "module_setup": true}, "changed": false, "failed": false, "invocation": {"module_args": {"fact_path": "/etc/ansible/facts.d", "filter": "*", "gather_subset": ["all"], "gather_timeout": 10}}, "item": "kube3"}]} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @/root/gcs/deploy/vagrant-playbook.retry PLAY RECAP ********************************************************************* kube1 : ok=0 changed=0 unreachable=1 failed=0 kube2 : ok=19 changed=8 unreachable=1 failed=0 kube3 : ok=19 changed=8 unreachable=1 failed=0 Thursday 02 May 2019 01:39:00 +0100 (0:00:06.817) 0:01:30.637 ********** =============================================================================== Install packages ------------------------------------------------------- 32.94s Wait for host to be available ------------------------------------------ 19.39s gather facts from all instances ----------------------------------------- 6.82s Persist loaded modules -------------------------------------------------- 5.37s bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS and Tumbleweed) --- 2.49s Load required kernel modules -------------------------------------------- 2.44s bootstrap-os : Gather nodes hostnames ----------------------------------- 2.36s Reboot to make layered packages available ------------------------------- 2.13s Extend root VG ---------------------------------------------------------- 1.91s bootstrap-os : Disable fastestmirror plugin ----------------------------- 1.91s Gathering Facts --------------------------------------------------------- 1.88s bootstrap-os : Create remote_tmp for it is used by another module ------- 1.87s Extend the root LV and FS to occupy remaining space --------------------- 1.78s bootstrap-os : Remove require tty --------------------------------------- 1.42s bootstrap-os : Check presence of fastestmirror.conf --------------------- 1.28s bootstrap-os : check if atomic host ------------------------------------- 1.21s bootstrap-os : Fetch /etc/os-release ------------------------------------ 0.45s kubespray-defaults : Configure defaults --------------------------------- 0.37s bootstrap-os : set_fact ------------------------------------------------- 0.33s download : Download items ----------------------------------------------- 0.33s ==> kube3: An error occurred. The error will be shown after all tasks complete. An error occurred while executing multiple actions in parallel. Any errors that occurred are shown below. An error occurred while executing the action on the 'kube3' machine. Please handle this error then try again: Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. Build step 'Execute shell' marked build as failure Performing Post build task... Could not match :Build started : False Logical operation result is FALSE Skipping script : # cico-node-done-from-ansible.sh # A script that releases nodes from a SSID file written by SSID_FILE=${SSID_FILE:-$WORKSPACE/cico-ssid} for ssid in $(cat ${SSID_FILE}) do cico -q node done $ssid done END OF POST BUILD TASK : 0